./data/MUCAC/CelebAMask-HQ/CelebA-HQ-img
./data/MUCAC/CelebAMask-HQ/CelebA-HQ-img

📌 Retain class distribution for seed 2:
Class 0: 5284
Class 1: 4210

📌 Forget class distribution for seed 2:
Class 0: 527
Class 1: 527

📊 Updated class distribution:
Retain set:
  Class 0: 5547
  Class 1: 4473
Forget set:
  Class 0: 264
  Class 1: 264
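The updated split above keeps the forget set exactly class-balanced (264/264) while the retain set holds the remaining samples. A minimal sketch of how such a balanced forget split can be drawn with a fixed seed; `balanced_forget_split` is a hypothetical helper, not the pipeline's actual sampling code:

```python
import random

def balanced_forget_split(labels, per_class, seed=2):
    """Draw an equal number of indices per class for the forget set;
    all remaining indices form the retain set."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    forget = []
    for y, idxs in sorted(by_class.items()):
        forget += rng.sample(idxs, per_class)  # same count per class
    forget_set = set(forget)
    retain = [i for i in range(len(labels)) if i not in forget_set]
    return retain, forget

# toy example: 10 samples, 2 classes, 2 forgotten per class
labels = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
retain, forget = balanced_forget_split(labels, per_class=2)
```

With the real label lists, `per_class=264` would reproduce the balanced forget counts shown above.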
./data/MUCAC/CelebAMask-HQ/CelebA-HQ-img
./data/MUCAC/CelebAMask-HQ/CelebA-HQ-img
⚠️ Warning: Retain train loader may not be shuffled.
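The warning above indicates the retain train loader may have been built without shuffling. In PyTorch, `DataLoader(..., shuffle=True)` installs a `RandomSampler`; a hedged sketch of the kind of check that could emit such a warning (the project's actual check is not shown in this log):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, RandomSampler

def warn_if_not_shuffled(loader, name="Retain train loader"):
    # shuffle=True installs a RandomSampler under the hood;
    # anything else (e.g. SequentialSampler) means fixed order.
    if not isinstance(loader.sampler, RandomSampler):
        print(f"⚠️ Warning: {name} may not be shuffled.")

ds = TensorDataset(torch.arange(8).float().unsqueeze(1),
                   torch.zeros(8, dtype=torch.long))
sequential = DataLoader(ds, batch_size=4, shuffle=False)
shuffled = DataLoader(ds, batch_size=4, shuffle=True)
warn_if_not_shuffled(sequential)  # prints the warning
warn_if_not_shuffled(shuffled)    # silent
```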
Training Epoch: 1 [256/10020]	Loss: 0.6909	LR: 0.000000
Training Epoch: 1 [512/10020]	Loss: 0.6875	LR: 0.002500
Training Epoch: 1 [768/10020]	Loss: 0.6849	LR: 0.005000
Training Epoch: 1 [1024/10020]	Loss: 0.6994	LR: 0.007500
Training Epoch: 1 [1280/10020]	Loss: 0.7035	LR: 0.010000
Training Epoch: 1 [1536/10020]	Loss: 0.6889	LR: 0.012500
Training Epoch: 1 [1792/10020]	Loss: 0.6711	LR: 0.015000
Training Epoch: 1 [2048/10020]	Loss: 0.6671	LR: 0.017500
Training Epoch: 1 [2304/10020]	Loss: 0.6397	LR: 0.020000
Training Epoch: 1 [2560/10020]	Loss: 0.7472	LR: 0.022500
Training Epoch: 1 [2816/10020]	Loss: 0.7841	LR: 0.025000
Training Epoch: 1 [3072/10020]	Loss: 0.9161	LR: 0.027500
Training Epoch: 1 [3328/10020]	Loss: 0.7785	LR: 0.030000
Training Epoch: 1 [3584/10020]	Loss: 0.7424	LR: 0.032500
Training Epoch: 1 [3840/10020]	Loss: 0.6931	LR: 0.035000
Training Epoch: 1 [4096/10020]	Loss: 0.7265	LR: 0.037500
Training Epoch: 1 [4352/10020]	Loss: 1.1201	LR: 0.040000
Training Epoch: 1 [4608/10020]	Loss: 1.8799	LR: 0.042500
Training Epoch: 1 [4864/10020]	Loss: 0.7849	LR: 0.045000
Training Epoch: 1 [5120/10020]	Loss: 0.9243	LR: 0.047500
Training Epoch: 1 [5376/10020]	Loss: 1.0073	LR: 0.050000
Training Epoch: 1 [5632/10020]	Loss: 0.8339	LR: 0.052500
Training Epoch: 1 [5888/10020]	Loss: 0.9016	LR: 0.055000
Training Epoch: 1 [6144/10020]	Loss: 0.6982	LR: 0.057500
Training Epoch: 1 [6400/10020]	Loss: 0.8351	LR: 0.060000
Training Epoch: 1 [6656/10020]	Loss: 0.8345	LR: 0.062500
Training Epoch: 1 [6912/10020]	Loss: 0.7740	LR: 0.065000
Training Epoch: 1 [7168/10020]	Loss: 0.7375	LR: 0.067500
Training Epoch: 1 [7424/10020]	Loss: 0.9048	LR: 0.070000
Training Epoch: 1 [7680/10020]	Loss: 1.0289	LR: 0.072500
Training Epoch: 1 [7936/10020]	Loss: 0.8452	LR: 0.075000
Training Epoch: 1 [8192/10020]	Loss: 1.0499	LR: 0.077500
Training Epoch: 1 [8448/10020]	Loss: 0.7118	LR: 0.080000
Training Epoch: 1 [8704/10020]	Loss: 1.0094	LR: 0.082500
Training Epoch: 1 [8960/10020]	Loss: 0.7369	LR: 0.085000
Training Epoch: 1 [9216/10020]	Loss: 0.7330	LR: 0.087500
Training Epoch: 1 [9472/10020]	Loss: 0.7461	LR: 0.090000
Training Epoch: 1 [9728/10020]	Loss: 0.7599	LR: 0.092500
Training Epoch: 1 [9984/10020]	Loss: 0.7839	LR: 0.095000
Training Epoch: 1 [10020/10020]	Loss: 0.5900	LR: 0.097500
Epoch 1 - Average Train Loss: 0.8238, Train Accuracy: 0.5215
Epoch 1 training time consumed: 325.90s
Evaluating Network.....
Test set: Epoch: 1, Average loss: 0.0162, Accuracy: 0.5550, Time consumed:7.91s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-1-best.pth
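During epoch 1 the learning rate climbs linearly from 0.000000 to 0.097500 in steps of 0.0025 per logged batch (40 batches of 256 covering 10020 samples), then holds at the base rate 0.1 from epoch 2 onward. This matches a per-iteration linear warmup over the first epoch; a minimal sketch under that assumption (not necessarily the project's exact scheduler):

```python
def warmup_lr(step, total_warmup_steps, base_lr=0.1):
    """Linear per-iteration warmup: 0 -> base_lr over the warmup
    window, then constant base_lr."""
    if step >= total_warmup_steps:
        return base_lr
    return base_lr * step / total_warmup_steps

# 40 logged steps in epoch 1 of this run
lrs = [warmup_lr(s, 40) for s in range(42)]
```

Step 0 gives 0.0 and step 39 gives 0.0975, matching the first and last LR values logged for epoch 1.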
Training Epoch: 2 [256/10020]	Loss: 0.7663	LR: 0.100000
Training Epoch: 2 [512/10020]	Loss: 0.7264	LR: 0.100000
Training Epoch: 2 [768/10020]	Loss: 0.7338	LR: 0.100000
Training Epoch: 2 [1024/10020]	Loss: 0.7260	LR: 0.100000
Training Epoch: 2 [1280/10020]	Loss: 0.6957	LR: 0.100000
Training Epoch: 2 [1536/10020]	Loss: 0.6889	LR: 0.100000
Training Epoch: 2 [1792/10020]	Loss: 0.7119	LR: 0.100000
Training Epoch: 2 [2048/10020]	Loss: 0.6945	LR: 0.100000
Training Epoch: 2 [2304/10020]	Loss: 0.6665	LR: 0.100000
Training Epoch: 2 [2560/10020]	Loss: 0.6975	LR: 0.100000
Training Epoch: 2 [2816/10020]	Loss: 0.6606	LR: 0.100000
Training Epoch: 2 [3072/10020]	Loss: 0.7361	LR: 0.100000
Training Epoch: 2 [3328/10020]	Loss: 0.6954	LR: 0.100000
Training Epoch: 2 [3584/10020]	Loss: 0.7002	LR: 0.100000
Training Epoch: 2 [3840/10020]	Loss: 0.7291	LR: 0.100000
Training Epoch: 2 [4096/10020]	Loss: 0.7497	LR: 0.100000
Training Epoch: 2 [4352/10020]	Loss: 0.6661	LR: 0.100000
Training Epoch: 2 [4608/10020]	Loss: 0.7415	LR: 0.100000
Training Epoch: 2 [4864/10020]	Loss: 0.6566	LR: 0.100000
Training Epoch: 2 [5120/10020]	Loss: 0.6751	LR: 0.100000
Training Epoch: 2 [5376/10020]	Loss: 0.6843	LR: 0.100000
Training Epoch: 2 [5632/10020]	Loss: 0.6746	LR: 0.100000
Training Epoch: 2 [5888/10020]	Loss: 0.6640	LR: 0.100000
Training Epoch: 2 [6144/10020]	Loss: 0.6486	LR: 0.100000
Training Epoch: 2 [6400/10020]	Loss: 0.6918	LR: 0.100000
Training Epoch: 2 [6656/10020]	Loss: 0.6643	LR: 0.100000
Training Epoch: 2 [6912/10020]	Loss: 0.6824	LR: 0.100000
Training Epoch: 2 [7168/10020]	Loss: 0.6996	LR: 0.100000
Training Epoch: 2 [7424/10020]	Loss: 0.6879	LR: 0.100000
Training Epoch: 2 [7680/10020]	Loss: 0.6870	LR: 0.100000
Training Epoch: 2 [7936/10020]	Loss: 0.6810	LR: 0.100000
Training Epoch: 2 [8192/10020]	Loss: 0.7104	LR: 0.100000
Training Epoch: 2 [8448/10020]	Loss: 0.6702	LR: 0.100000
Training Epoch: 2 [8704/10020]	Loss: 0.6688	LR: 0.100000
Training Epoch: 2 [8960/10020]	Loss: 0.6888	LR: 0.100000
Training Epoch: 2 [9216/10020]	Loss: 0.6933	LR: 0.100000
Training Epoch: 2 [9472/10020]	Loss: 0.6986	LR: 0.100000
Training Epoch: 2 [9728/10020]	Loss: 0.6798	LR: 0.100000
Training Epoch: 2 [9984/10020]	Loss: 0.6853	LR: 0.100000
Training Epoch: 2 [10020/10020]	Loss: 0.7062	LR: 0.100000
Epoch 2 - Average Train Loss: 0.6944, Train Accuracy: 0.5596
Epoch 2 training time consumed: 145.51s
Evaluating Network.....
Test set: Epoch: 2, Average loss: 0.0032, Accuracy: 0.5390, Time consumed:8.13s
Training Epoch: 3 [256/10020]	Loss: 0.7231	LR: 0.100000
Training Epoch: 3 [512/10020]	Loss: 0.7356	LR: 0.100000
Training Epoch: 3 [768/10020]	Loss: 0.7285	LR: 0.100000
Training Epoch: 3 [1024/10020]	Loss: 0.6996	LR: 0.100000
Training Epoch: 3 [1280/10020]	Loss: 0.7131	LR: 0.100000
Training Epoch: 3 [1536/10020]	Loss: 0.6883	LR: 0.100000
Training Epoch: 3 [1792/10020]	Loss: 0.6940	LR: 0.100000
Training Epoch: 3 [2048/10020]	Loss: 0.6440	LR: 0.100000
Training Epoch: 3 [2304/10020]	Loss: 0.6850	LR: 0.100000
Training Epoch: 3 [2560/10020]	Loss: 0.7202	LR: 0.100000
Training Epoch: 3 [2816/10020]	Loss: 0.6728	LR: 0.100000
Training Epoch: 3 [3072/10020]	Loss: 0.7119	LR: 0.100000
Training Epoch: 3 [3328/10020]	Loss: 0.6822	LR: 0.100000
Training Epoch: 3 [3584/10020]	Loss: 0.7541	LR: 0.100000
Training Epoch: 3 [3840/10020]	Loss: 0.7389	LR: 0.100000
Training Epoch: 3 [4096/10020]	Loss: 0.6670	LR: 0.100000
Training Epoch: 3 [4352/10020]	Loss: 0.7171	LR: 0.100000
Training Epoch: 3 [4608/10020]	Loss: 0.6589	LR: 0.100000
Training Epoch: 3 [4864/10020]	Loss: 0.7075	LR: 0.100000
Training Epoch: 3 [5120/10020]	Loss: 0.6923	LR: 0.100000
Training Epoch: 3 [5376/10020]	Loss: 0.6932	LR: 0.100000
Training Epoch: 3 [5632/10020]	Loss: 0.6876	LR: 0.100000
Training Epoch: 3 [5888/10020]	Loss: 0.7323	LR: 0.100000
Training Epoch: 3 [6144/10020]	Loss: 0.6816	LR: 0.100000
Training Epoch: 3 [6400/10020]	Loss: 0.6767	LR: 0.100000
Training Epoch: 3 [6656/10020]	Loss: 0.6847	LR: 0.100000
Training Epoch: 3 [6912/10020]	Loss: 0.7122	LR: 0.100000
Training Epoch: 3 [7168/10020]	Loss: 0.6805	LR: 0.100000
Training Epoch: 3 [7424/10020]	Loss: 0.7046	LR: 0.100000
Training Epoch: 3 [7680/10020]	Loss: 0.6904	LR: 0.100000
Training Epoch: 3 [7936/10020]	Loss: 0.7057	LR: 0.100000
Training Epoch: 3 [8192/10020]	Loss: 0.6932	LR: 0.100000
Training Epoch: 3 [8448/10020]	Loss: 0.6807	LR: 0.100000
Training Epoch: 3 [8704/10020]	Loss: 0.7102	LR: 0.100000
Training Epoch: 3 [8960/10020]	Loss: 0.6779	LR: 0.100000
Training Epoch: 3 [9216/10020]	Loss: 0.7080	LR: 0.100000
Training Epoch: 3 [9472/10020]	Loss: 0.6846	LR: 0.100000
Training Epoch: 3 [9728/10020]	Loss: 0.6814	LR: 0.100000
Training Epoch: 3 [9984/10020]	Loss: 0.6477	LR: 0.100000
Training Epoch: 3 [10020/10020]	Loss: 0.6873	LR: 0.100000
Epoch 3 - Average Train Loss: 0.6966, Train Accuracy: 0.5704
Epoch 3 training time consumed: 144.91s
Evaluating Network.....
Test set: Epoch: 3, Average loss: 0.0030, Accuracy: 0.5768, Time consumed:8.01s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-3-best.pth
Training Epoch: 4 [256/10020]	Loss: 0.6956	LR: 0.100000
Training Epoch: 4 [512/10020]	Loss: 0.7246	LR: 0.100000
Training Epoch: 4 [768/10020]	Loss: 0.7027	LR: 0.100000
Training Epoch: 4 [1024/10020]	Loss: 0.6695	LR: 0.100000
Training Epoch: 4 [1280/10020]	Loss: 0.7389	LR: 0.100000
Training Epoch: 4 [1536/10020]	Loss: 0.7402	LR: 0.100000
Training Epoch: 4 [1792/10020]	Loss: 0.6745	LR: 0.100000
Training Epoch: 4 [2048/10020]	Loss: 0.6840	LR: 0.100000
Training Epoch: 4 [2304/10020]	Loss: 0.7124	LR: 0.100000
Training Epoch: 4 [2560/10020]	Loss: 0.7053	LR: 0.100000
Training Epoch: 4 [2816/10020]	Loss: 0.6719	LR: 0.100000
Training Epoch: 4 [3072/10020]	Loss: 0.6948	LR: 0.100000
Training Epoch: 4 [3328/10020]	Loss: 0.6803	LR: 0.100000
Training Epoch: 4 [3584/10020]	Loss: 0.6704	LR: 0.100000
Training Epoch: 4 [3840/10020]	Loss: 0.7079	LR: 0.100000
Training Epoch: 4 [4096/10020]	Loss: 0.6792	LR: 0.100000
Training Epoch: 4 [4352/10020]	Loss: 0.6957	LR: 0.100000
Training Epoch: 4 [4608/10020]	Loss: 0.6786	LR: 0.100000
Training Epoch: 4 [4864/10020]	Loss: 0.6696	LR: 0.100000
Training Epoch: 4 [5120/10020]	Loss: 0.6763	LR: 0.100000
Training Epoch: 4 [5376/10020]	Loss: 0.6588	LR: 0.100000
Training Epoch: 4 [5632/10020]	Loss: 0.6816	LR: 0.100000
Training Epoch: 4 [5888/10020]	Loss: 0.6902	LR: 0.100000
Training Epoch: 4 [6144/10020]	Loss: 0.6744	LR: 0.100000
Training Epoch: 4 [6400/10020]	Loss: 0.6685	LR: 0.100000
Training Epoch: 4 [6656/10020]	Loss: 0.6700	LR: 0.100000
Training Epoch: 4 [6912/10020]	Loss: 0.6767	LR: 0.100000
Training Epoch: 4 [7168/10020]	Loss: 0.6789	LR: 0.100000
Training Epoch: 4 [7424/10020]	Loss: 0.6468	LR: 0.100000
Training Epoch: 4 [7680/10020]	Loss: 0.6935	LR: 0.100000
Training Epoch: 4 [7936/10020]	Loss: 0.6809	LR: 0.100000
Training Epoch: 4 [8192/10020]	Loss: 0.6744	LR: 0.100000
Training Epoch: 4 [8448/10020]	Loss: 0.6545	LR: 0.100000
Training Epoch: 4 [8704/10020]	Loss: 0.6711	LR: 0.100000
Training Epoch: 4 [8960/10020]	Loss: 0.6505	LR: 0.100000
Training Epoch: 4 [9216/10020]	Loss: 0.6509	LR: 0.100000
Training Epoch: 4 [9472/10020]	Loss: 0.7132	LR: 0.100000
Training Epoch: 4 [9728/10020]	Loss: 0.6944	LR: 0.100000
Training Epoch: 4 [9984/10020]	Loss: 0.7020	LR: 0.100000
Training Epoch: 4 [10020/10020]	Loss: 0.5688	LR: 0.100000
Epoch 4 - Average Train Loss: 0.6843, Train Accuracy: 0.5740
Epoch 4 training time consumed: 144.93s
Evaluating Network.....
Test set: Epoch: 4, Average loss: 0.0031, Accuracy: 0.5661, Time consumed:8.01s
Training Epoch: 5 [256/10020]	Loss: 0.8059	LR: 0.100000
Training Epoch: 5 [512/10020]	Loss: 0.7518	LR: 0.100000
Training Epoch: 5 [768/10020]	Loss: 0.6910	LR: 0.100000
Training Epoch: 5 [1024/10020]	Loss: 0.7318	LR: 0.100000
Training Epoch: 5 [1280/10020]	Loss: 0.7297	LR: 0.100000
Training Epoch: 5 [1536/10020]	Loss: 0.7511	LR: 0.100000
Training Epoch: 5 [1792/10020]	Loss: 0.7038	LR: 0.100000
Training Epoch: 5 [2048/10020]	Loss: 0.6911	LR: 0.100000
Training Epoch: 5 [2304/10020]	Loss: 0.6835	LR: 0.100000
Training Epoch: 5 [2560/10020]	Loss: 0.7058	LR: 0.100000
Training Epoch: 5 [2816/10020]	Loss: 0.7144	LR: 0.100000
Training Epoch: 5 [3072/10020]	Loss: 0.7175	LR: 0.100000
Training Epoch: 5 [3328/10020]	Loss: 0.7096	LR: 0.100000
Training Epoch: 5 [3584/10020]	Loss: 0.6650	LR: 0.100000
Training Epoch: 5 [3840/10020]	Loss: 0.6755	LR: 0.100000
Training Epoch: 5 [4096/10020]	Loss: 0.6710	LR: 0.100000
Training Epoch: 5 [4352/10020]	Loss: 0.6741	LR: 0.100000
Training Epoch: 5 [4608/10020]	Loss: 0.6932	LR: 0.100000
Training Epoch: 5 [4864/10020]	Loss: 0.6909	LR: 0.100000
Training Epoch: 5 [5120/10020]	Loss: 0.6682	LR: 0.100000
Training Epoch: 5 [5376/10020]	Loss: 0.6853	LR: 0.100000
Training Epoch: 5 [5632/10020]	Loss: 0.7004	LR: 0.100000
Training Epoch: 5 [5888/10020]	Loss: 0.6953	LR: 0.100000
Training Epoch: 5 [6144/10020]	Loss: 0.6662	LR: 0.100000
Training Epoch: 5 [6400/10020]	Loss: 0.6979	LR: 0.100000
Training Epoch: 5 [6656/10020]	Loss: 0.6848	LR: 0.100000
Training Epoch: 5 [6912/10020]	Loss: 0.6991	LR: 0.100000
Training Epoch: 5 [7168/10020]	Loss: 0.6804	LR: 0.100000
Training Epoch: 5 [7424/10020]	Loss: 0.6527	LR: 0.100000
Training Epoch: 5 [7680/10020]	Loss: 0.6957	LR: 0.100000
Training Epoch: 5 [7936/10020]	Loss: 0.6581	LR: 0.100000
Training Epoch: 5 [8192/10020]	Loss: 0.6689	LR: 0.100000
Training Epoch: 5 [8448/10020]	Loss: 0.6688	LR: 0.100000
Training Epoch: 5 [8704/10020]	Loss: 0.6992	LR: 0.100000
Training Epoch: 5 [8960/10020]	Loss: 0.6668	LR: 0.100000
Training Epoch: 5 [9216/10020]	Loss: 0.6698	LR: 0.100000
Training Epoch: 5 [9472/10020]	Loss: 0.6938	LR: 0.100000
Training Epoch: 5 [9728/10020]	Loss: 0.6573	LR: 0.100000
Training Epoch: 5 [9984/10020]	Loss: 0.6683	LR: 0.100000
Training Epoch: 5 [10020/10020]	Loss: 0.6945	LR: 0.100000
Epoch 5 - Average Train Loss: 0.6932, Train Accuracy: 0.5536
Epoch 5 training time consumed: 145.12s
Evaluating Network.....
Test set: Epoch: 5, Average loss: 0.0034, Accuracy: 0.4712, Time consumed:8.13s
Training Epoch: 6 [256/10020]	Loss: 0.6673	LR: 0.100000
Training Epoch: 6 [512/10020]	Loss: 0.6879	LR: 0.100000
Training Epoch: 6 [768/10020]	Loss: 0.6813	LR: 0.100000
Training Epoch: 6 [1024/10020]	Loss: 0.6828	LR: 0.100000
Training Epoch: 6 [1280/10020]	Loss: 0.6892	LR: 0.100000
Training Epoch: 6 [1536/10020]	Loss: 0.6895	LR: 0.100000
Training Epoch: 6 [1792/10020]	Loss: 0.6483	LR: 0.100000
Training Epoch: 6 [2048/10020]	Loss: 0.6702	LR: 0.100000
Training Epoch: 6 [2304/10020]	Loss: 0.6675	LR: 0.100000
Training Epoch: 6 [2560/10020]	Loss: 0.6581	LR: 0.100000
Training Epoch: 6 [2816/10020]	Loss: 0.6905	LR: 0.100000
Training Epoch: 6 [3072/10020]	Loss: 0.6617	LR: 0.100000
Training Epoch: 6 [3328/10020]	Loss: 0.6746	LR: 0.100000
Training Epoch: 6 [3584/10020]	Loss: 0.6701	LR: 0.100000
Training Epoch: 6 [3840/10020]	Loss: 0.6661	LR: 0.100000
Training Epoch: 6 [4096/10020]	Loss: 0.6649	LR: 0.100000
Training Epoch: 6 [4352/10020]	Loss: 0.6612	LR: 0.100000
Training Epoch: 6 [4608/10020]	Loss: 0.6654	LR: 0.100000
Training Epoch: 6 [4864/10020]	Loss: 0.6786	LR: 0.100000
Training Epoch: 6 [5120/10020]	Loss: 0.6922	LR: 0.100000
Training Epoch: 6 [5376/10020]	Loss: 0.6537	LR: 0.100000
Training Epoch: 6 [5632/10020]	Loss: 0.7011	LR: 0.100000
Training Epoch: 6 [5888/10020]	Loss: 0.6540	LR: 0.100000
Training Epoch: 6 [6144/10020]	Loss: 0.6576	LR: 0.100000
Training Epoch: 6 [6400/10020]	Loss: 0.6694	LR: 0.100000
Training Epoch: 6 [6656/10020]	Loss: 0.6698	LR: 0.100000
Training Epoch: 6 [6912/10020]	Loss: 0.6625	LR: 0.100000
Training Epoch: 6 [7168/10020]	Loss: 0.6898	LR: 0.100000
Training Epoch: 6 [7424/10020]	Loss: 0.6624	LR: 0.100000
Training Epoch: 6 [7680/10020]	Loss: 0.6596	LR: 0.100000
Training Epoch: 6 [7936/10020]	Loss: 0.6954	LR: 0.100000
Training Epoch: 6 [8192/10020]	Loss: 0.6800	LR: 0.100000
Training Epoch: 6 [8448/10020]	Loss: 0.7000	LR: 0.100000
Training Epoch: 6 [8704/10020]	Loss: 0.6716	LR: 0.100000
Training Epoch: 6 [8960/10020]	Loss: 0.6815	LR: 0.100000
Training Epoch: 6 [9216/10020]	Loss: 0.6854	LR: 0.100000
Training Epoch: 6 [9472/10020]	Loss: 0.6705	LR: 0.100000
Training Epoch: 6 [9728/10020]	Loss: 0.6738	LR: 0.100000
Training Epoch: 6 [9984/10020]	Loss: 0.6722	LR: 0.100000
Training Epoch: 6 [10020/10020]	Loss: 0.6218	LR: 0.100000
Epoch 6 - Average Train Loss: 0.6736, Train Accuracy: 0.5935
Epoch 6 training time consumed: 145.16s
Evaluating Network.....
Test set: Epoch: 6, Average loss: 0.0030, Accuracy: 0.5835, Time consumed:8.09s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-6-best.pth
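Checkpoints are written only on epochs whose test accuracy beats the running best (epochs 2, 4, 5, and 9 so far produce no save). A hedged sketch of this best-accuracy gate; the class and path template are illustrative, modeled on the filenames in the log:

```python
class BestCheckpoint:
    """Save weights only when test accuracy exceeds the best seen so far."""
    def __init__(self, path_template):
        self.path_template = path_template
        self.best_acc = 0.0

    def update(self, epoch, acc, save_fn):
        if acc > self.best_acc:
            self.best_acc = acc
            path = self.path_template.format(epoch=epoch)
            save_fn(path)  # e.g. torch.save(model.state_dict(), path)
            print(f"Saving weights file to {path}")
            return True
        return False

saved = []
ckpt = BestCheckpoint("ResNet18-MUCAC-seed2-ret50-{epoch}-best.pth")
# first three epochs of this run: save, skip (acc dropped), save
for epoch, acc in [(1, 0.5550), (2, 0.5390), (3, 0.5768)]:
    ckpt.update(epoch, acc, save_fn=lambda p: saved.append(p))
```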
Training Epoch: 7 [256/10020]	Loss: 0.6701	LR: 0.100000
Training Epoch: 7 [512/10020]	Loss: 0.6948	LR: 0.100000
Training Epoch: 7 [768/10020]	Loss: 0.7006	LR: 0.100000
Training Epoch: 7 [1024/10020]	Loss: 0.6799	LR: 0.100000
Training Epoch: 7 [1280/10020]	Loss: 0.6718	LR: 0.100000
Training Epoch: 7 [1536/10020]	Loss: 0.6430	LR: 0.100000
Training Epoch: 7 [1792/10020]	Loss: 0.6833	LR: 0.100000
Training Epoch: 7 [2048/10020]	Loss: 0.6758	LR: 0.100000
Training Epoch: 7 [2304/10020]	Loss: 0.6657	LR: 0.100000
Training Epoch: 7 [2560/10020]	Loss: 0.6710	LR: 0.100000
Training Epoch: 7 [2816/10020]	Loss: 0.6684	LR: 0.100000
Training Epoch: 7 [3072/10020]	Loss: 0.6585	LR: 0.100000
Training Epoch: 7 [3328/10020]	Loss: 0.6655	LR: 0.100000
Training Epoch: 7 [3584/10020]	Loss: 0.6791	LR: 0.100000
Training Epoch: 7 [3840/10020]	Loss: 0.6618	LR: 0.100000
Training Epoch: 7 [4096/10020]	Loss: 0.6304	LR: 0.100000
Training Epoch: 7 [4352/10020]	Loss: 0.6531	LR: 0.100000
Training Epoch: 7 [4608/10020]	Loss: 0.6627	LR: 0.100000
Training Epoch: 7 [4864/10020]	Loss: 0.6328	LR: 0.100000
Training Epoch: 7 [5120/10020]	Loss: 0.6829	LR: 0.100000
Training Epoch: 7 [5376/10020]	Loss: 0.6548	LR: 0.100000
Training Epoch: 7 [5632/10020]	Loss: 0.6767	LR: 0.100000
Training Epoch: 7 [5888/10020]	Loss: 0.6548	LR: 0.100000
Training Epoch: 7 [6144/10020]	Loss: 0.6520	LR: 0.100000
Training Epoch: 7 [6400/10020]	Loss: 0.6635	LR: 0.100000
Training Epoch: 7 [6656/10020]	Loss: 0.6796	LR: 0.100000
Training Epoch: 7 [6912/10020]	Loss: 0.6708	LR: 0.100000
Training Epoch: 7 [7168/10020]	Loss: 0.6861	LR: 0.100000
Training Epoch: 7 [7424/10020]	Loss: 0.6521	LR: 0.100000
Training Epoch: 7 [7680/10020]	Loss: 0.6829	LR: 0.100000
Training Epoch: 7 [7936/10020]	Loss: 0.6502	LR: 0.100000
Training Epoch: 7 [8192/10020]	Loss: 0.6690	LR: 0.100000
Training Epoch: 7 [8448/10020]	Loss: 0.6500	LR: 0.100000
Training Epoch: 7 [8704/10020]	Loss: 0.6674	LR: 0.100000
Training Epoch: 7 [8960/10020]	Loss: 0.6690	LR: 0.100000
Training Epoch: 7 [9216/10020]	Loss: 0.6626	LR: 0.100000
Training Epoch: 7 [9472/10020]	Loss: 0.6392	LR: 0.100000
Training Epoch: 7 [9728/10020]	Loss: 0.6743	LR: 0.100000
Training Epoch: 7 [9984/10020]	Loss: 0.6482	LR: 0.100000
Training Epoch: 7 [10020/10020]	Loss: 0.6281	LR: 0.100000
Epoch 7 - Average Train Loss: 0.6654, Train Accuracy: 0.6009
Epoch 7 training time consumed: 145.07s
Evaluating Network.....
Test set: Epoch: 7, Average loss: 0.0029, Accuracy: 0.6048, Time consumed:8.21s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-7-best.pth
Training Epoch: 8 [256/10020]	Loss: 0.6734	LR: 0.100000
Training Epoch: 8 [512/10020]	Loss: 0.6843	LR: 0.100000
Training Epoch: 8 [768/10020]	Loss: 0.6647	LR: 0.100000
Training Epoch: 8 [1024/10020]	Loss: 0.6518	LR: 0.100000
Training Epoch: 8 [1280/10020]	Loss: 0.6665	LR: 0.100000
Training Epoch: 8 [1536/10020]	Loss: 0.6343	LR: 0.100000
Training Epoch: 8 [1792/10020]	Loss: 0.6860	LR: 0.100000
Training Epoch: 8 [2048/10020]	Loss: 0.6756	LR: 0.100000
Training Epoch: 8 [2304/10020]	Loss: 0.6477	LR: 0.100000
Training Epoch: 8 [2560/10020]	Loss: 0.6449	LR: 0.100000
Training Epoch: 8 [2816/10020]	Loss: 0.6800	LR: 0.100000
Training Epoch: 8 [3072/10020]	Loss: 0.6808	LR: 0.100000
Training Epoch: 8 [3328/10020]	Loss: 0.6724	LR: 0.100000
Training Epoch: 8 [3584/10020]	Loss: 0.6509	LR: 0.100000
Training Epoch: 8 [3840/10020]	Loss: 0.6795	LR: 0.100000
Training Epoch: 8 [4096/10020]	Loss: 0.6776	LR: 0.100000
Training Epoch: 8 [4352/10020]	Loss: 0.6691	LR: 0.100000
Training Epoch: 8 [4608/10020]	Loss: 0.6539	LR: 0.100000
Training Epoch: 8 [4864/10020]	Loss: 0.6579	LR: 0.100000
Training Epoch: 8 [5120/10020]	Loss: 0.6550	LR: 0.100000
Training Epoch: 8 [5376/10020]	Loss: 0.6552	LR: 0.100000
Training Epoch: 8 [5632/10020]	Loss: 0.6729	LR: 0.100000
Training Epoch: 8 [5888/10020]	Loss: 0.6789	LR: 0.100000
Training Epoch: 8 [6144/10020]	Loss: 0.6698	LR: 0.100000
Training Epoch: 8 [6400/10020]	Loss: 0.6493	LR: 0.100000
Training Epoch: 8 [6656/10020]	Loss: 0.6598	LR: 0.100000
Training Epoch: 8 [6912/10020]	Loss: 0.6487	LR: 0.100000
Training Epoch: 8 [7168/10020]	Loss: 0.6805	LR: 0.100000
Training Epoch: 8 [7424/10020]	Loss: 0.6558	LR: 0.100000
Training Epoch: 8 [7680/10020]	Loss: 0.6311	LR: 0.100000
Training Epoch: 8 [7936/10020]	Loss: 0.6739	LR: 0.100000
Training Epoch: 8 [8192/10020]	Loss: 0.6553	LR: 0.100000
Training Epoch: 8 [8448/10020]	Loss: 0.6409	LR: 0.100000
Training Epoch: 8 [8704/10020]	Loss: 0.6750	LR: 0.100000
Training Epoch: 8 [8960/10020]	Loss: 0.6882	LR: 0.100000
Training Epoch: 8 [9216/10020]	Loss: 0.6588	LR: 0.100000
Training Epoch: 8 [9472/10020]	Loss: 0.6666	LR: 0.100000
Training Epoch: 8 [9728/10020]	Loss: 0.6568	LR: 0.100000
Training Epoch: 8 [9984/10020]	Loss: 0.6669	LR: 0.100000
Training Epoch: 8 [10020/10020]	Loss: 0.7762	LR: 0.100000
Epoch 8 - Average Train Loss: 0.6643, Train Accuracy: 0.6052
Epoch 8 training time consumed: 144.98s
Evaluating Network.....
Test set: Epoch: 8, Average loss: 0.0029, Accuracy: 0.6358, Time consumed:7.96s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-8-best.pth
Training Epoch: 9 [256/10020]	Loss: 0.6487	LR: 0.100000
Training Epoch: 9 [512/10020]	Loss: 0.6575	LR: 0.100000
Training Epoch: 9 [768/10020]	Loss: 0.6658	LR: 0.100000
Training Epoch: 9 [1024/10020]	Loss: 0.6775	LR: 0.100000
Training Epoch: 9 [1280/10020]	Loss: 0.7062	LR: 0.100000
Training Epoch: 9 [1536/10020]	Loss: 0.6594	LR: 0.100000
Training Epoch: 9 [1792/10020]	Loss: 0.6775	LR: 0.100000
Training Epoch: 9 [2048/10020]	Loss: 0.6850	LR: 0.100000
Training Epoch: 9 [2304/10020]	Loss: 0.6843	LR: 0.100000
Training Epoch: 9 [2560/10020]	Loss: 0.6672	LR: 0.100000
Training Epoch: 9 [2816/10020]	Loss: 0.6419	LR: 0.100000
Training Epoch: 9 [3072/10020]	Loss: 0.6664	LR: 0.100000
Training Epoch: 9 [3328/10020]	Loss: 0.6686	LR: 0.100000
Training Epoch: 9 [3584/10020]	Loss: 0.6739	LR: 0.100000
Training Epoch: 9 [3840/10020]	Loss: 0.6564	LR: 0.100000
Training Epoch: 9 [4096/10020]	Loss: 0.6378	LR: 0.100000
Training Epoch: 9 [4352/10020]	Loss: 0.6330	LR: 0.100000
Training Epoch: 9 [4608/10020]	Loss: 0.6624	LR: 0.100000
Training Epoch: 9 [4864/10020]	Loss: 0.6502	LR: 0.100000
Training Epoch: 9 [5120/10020]	Loss: 0.6558	LR: 0.100000
Training Epoch: 9 [5376/10020]	Loss: 0.6190	LR: 0.100000
Training Epoch: 9 [5632/10020]	Loss: 0.6258	LR: 0.100000
Training Epoch: 9 [5888/10020]	Loss: 0.6363	LR: 0.100000
Training Epoch: 9 [6144/10020]	Loss: 0.6676	LR: 0.100000
Training Epoch: 9 [6400/10020]	Loss: 0.6268	LR: 0.100000
Training Epoch: 9 [6656/10020]	Loss: 0.6229	LR: 0.100000
Training Epoch: 9 [6912/10020]	Loss: 0.6748	LR: 0.100000
Training Epoch: 9 [7168/10020]	Loss: 0.6519	LR: 0.100000
Training Epoch: 9 [7424/10020]	Loss: 0.6791	LR: 0.100000
Training Epoch: 9 [7680/10020]	Loss: 0.6339	LR: 0.100000
Training Epoch: 9 [7936/10020]	Loss: 0.6578	LR: 0.100000
Training Epoch: 9 [8192/10020]	Loss: 0.6554	LR: 0.100000
Training Epoch: 9 [8448/10020]	Loss: 0.6392	LR: 0.100000
Training Epoch: 9 [8704/10020]	Loss: 0.6296	LR: 0.100000
Training Epoch: 9 [8960/10020]	Loss: 0.6911	LR: 0.100000
Training Epoch: 9 [9216/10020]	Loss: 0.6161	LR: 0.100000
Training Epoch: 9 [9472/10020]	Loss: 0.6373	LR: 0.100000
Training Epoch: 9 [9728/10020]	Loss: 0.5990	LR: 0.100000
Training Epoch: 9 [9984/10020]	Loss: 0.6029	LR: 0.100000
Training Epoch: 9 [10020/10020]	Loss: 0.6815	LR: 0.100000
Epoch 9 - Average Train Loss: 0.6525, Train Accuracy: 0.6178
Epoch 9 training time consumed: 145.05s
Evaluating Network.....
Test set: Epoch: 9, Average loss: 0.0033, Accuracy: 0.5511, Time consumed:8.16s
Training Epoch: 10 [256/10020]	Loss: 0.6188	LR: 0.020000
Training Epoch: 10 [512/10020]	Loss: 0.6123	LR: 0.020000
Training Epoch: 10 [768/10020]	Loss: 0.6712	LR: 0.020000
Training Epoch: 10 [1024/10020]	Loss: 0.6406	LR: 0.020000
Training Epoch: 10 [1280/10020]	Loss: 0.6181	LR: 0.020000
Training Epoch: 10 [1536/10020]	Loss: 0.6037	LR: 0.020000
Training Epoch: 10 [1792/10020]	Loss: 0.6451	LR: 0.020000
Training Epoch: 10 [2048/10020]	Loss: 0.6037	LR: 0.020000
Training Epoch: 10 [2304/10020]	Loss: 0.6405	LR: 0.020000
Training Epoch: 10 [2560/10020]	Loss: 0.6305	LR: 0.020000
Training Epoch: 10 [2816/10020]	Loss: 0.6110	LR: 0.020000
Training Epoch: 10 [3072/10020]	Loss: 0.6614	LR: 0.020000
Training Epoch: 10 [3328/10020]	Loss: 0.5866	LR: 0.020000
Training Epoch: 10 [3584/10020]	Loss: 0.6177	LR: 0.020000
Training Epoch: 10 [3840/10020]	Loss: 0.5986	LR: 0.020000
Training Epoch: 10 [4096/10020]	Loss: 0.5873	LR: 0.020000
Training Epoch: 10 [4352/10020]	Loss: 0.6089	LR: 0.020000
Training Epoch: 10 [4608/10020]	Loss: 0.5843	LR: 0.020000
Training Epoch: 10 [4864/10020]	Loss: 0.5742	LR: 0.020000
Training Epoch: 10 [5120/10020]	Loss: 0.5742	LR: 0.020000
Training Epoch: 10 [5376/10020]	Loss: 0.6184	LR: 0.020000
Training Epoch: 10 [5632/10020]	Loss: 0.6038	LR: 0.020000
Training Epoch: 10 [5888/10020]	Loss: 0.6344	LR: 0.020000
Training Epoch: 10 [6144/10020]	Loss: 0.6413	LR: 0.020000
Training Epoch: 10 [6400/10020]	Loss: 0.6092	LR: 0.020000
Training Epoch: 10 [6656/10020]	Loss: 0.6078	LR: 0.020000
Training Epoch: 10 [6912/10020]	Loss: 0.6131	LR: 0.020000
Training Epoch: 10 [7168/10020]	Loss: 0.5992	LR: 0.020000
Training Epoch: 10 [7424/10020]	Loss: 0.6517	LR: 0.020000
Training Epoch: 10 [7680/10020]	Loss: 0.5777	LR: 0.020000
Training Epoch: 10 [7936/10020]	Loss: 0.5866	LR: 0.020000
Training Epoch: 10 [8192/10020]	Loss: 0.5877	LR: 0.020000
Training Epoch: 10 [8448/10020]	Loss: 0.5928	LR: 0.020000
Training Epoch: 10 [8704/10020]	Loss: 0.6303	LR: 0.020000
Training Epoch: 10 [8960/10020]	Loss: 0.5694	LR: 0.020000
Training Epoch: 10 [9216/10020]	Loss: 0.6219	LR: 0.020000
Training Epoch: 10 [9472/10020]	Loss: 0.5745	LR: 0.020000
Training Epoch: 10 [9728/10020]	Loss: 0.5852	LR: 0.020000
Training Epoch: 10 [9984/10020]	Loss: 0.5697	LR: 0.020000
Training Epoch: 10 [10020/10020]	Loss: 0.5009	LR: 0.020000
Epoch 10 - Average Train Loss: 0.6089, Train Accuracy: 0.6743
Epoch 10 training time consumed: 144.69s
Evaluating Network.....
Test set: Epoch: 10, Average loss: 0.0027, Accuracy: 0.6828, Time consumed:8.07s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-10-best.pth
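Between epochs 9 and 10 the learning rate steps from 0.100000 down to 0.020000, a ×0.2 decay at a milestone epoch; in PyTorch this pattern is typically produced by `torch.optim.lr_scheduler.MultiStepLR`. A pure-Python sketch of the schedule as observed (milestone 10 and gamma 0.2 are inferred from the log; any later milestones are not visible in this excerpt):

```python
def stepped_lr(epoch, base_lr=0.1, milestones=(10,), gamma=0.2):
    """Multiply the base LR by gamma at each milestone epoch reached."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

schedule = [stepped_lr(e) for e in range(1, 15)]
```

Epochs 1 through 9 stay at 0.1; epoch 10 onward runs at 0.02, matching the logged LR values.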
Training Epoch: 11 [256/10020]	Loss: 0.5601	LR: 0.020000
Training Epoch: 11 [512/10020]	Loss: 0.5772	LR: 0.020000
Training Epoch: 11 [768/10020]	Loss: 0.5803	LR: 0.020000
Training Epoch: 11 [1024/10020]	Loss: 0.5482	LR: 0.020000
Training Epoch: 11 [1280/10020]	Loss: 0.5278	LR: 0.020000
Training Epoch: 11 [1536/10020]	Loss: 0.6027	LR: 0.020000
Training Epoch: 11 [1792/10020]	Loss: 0.5569	LR: 0.020000
Training Epoch: 11 [2048/10020]	Loss: 0.5540	LR: 0.020000
Training Epoch: 11 [2304/10020]	Loss: 0.5814	LR: 0.020000
Training Epoch: 11 [2560/10020]	Loss: 0.6340	LR: 0.020000
Training Epoch: 11 [2816/10020]	Loss: 0.5684	LR: 0.020000
Training Epoch: 11 [3072/10020]	Loss: 0.5852	LR: 0.020000
Training Epoch: 11 [3328/10020]	Loss: 0.5569	LR: 0.020000
Training Epoch: 11 [3584/10020]	Loss: 0.6277	LR: 0.020000
Training Epoch: 11 [3840/10020]	Loss: 0.6134	LR: 0.020000
Training Epoch: 11 [4096/10020]	Loss: 0.5618	LR: 0.020000
Training Epoch: 11 [4352/10020]	Loss: 0.5949	LR: 0.020000
Training Epoch: 11 [4608/10020]	Loss: 0.5864	LR: 0.020000
Training Epoch: 11 [4864/10020]	Loss: 0.6374	LR: 0.020000
Training Epoch: 11 [5120/10020]	Loss: 0.5889	LR: 0.020000
Training Epoch: 11 [5376/10020]	Loss: 0.6051	LR: 0.020000
Training Epoch: 11 [5632/10020]	Loss: 0.5564	LR: 0.020000
Training Epoch: 11 [5888/10020]	Loss: 0.5735	LR: 0.020000
Training Epoch: 11 [6144/10020]	Loss: 0.6227	LR: 0.020000
Training Epoch: 11 [6400/10020]	Loss: 0.5813	LR: 0.020000
Training Epoch: 11 [6656/10020]	Loss: 0.5506	LR: 0.020000
Training Epoch: 11 [6912/10020]	Loss: 0.5760	LR: 0.020000
Training Epoch: 11 [7168/10020]	Loss: 0.5702	LR: 0.020000
Training Epoch: 11 [7424/10020]	Loss: 0.6369	LR: 0.020000
Training Epoch: 11 [7680/10020]	Loss: 0.5716	LR: 0.020000
Training Epoch: 11 [7936/10020]	Loss: 0.5420	LR: 0.020000
Training Epoch: 11 [8192/10020]	Loss: 0.5818	LR: 0.020000
Training Epoch: 11 [8448/10020]	Loss: 0.5576	LR: 0.020000
Training Epoch: 11 [8704/10020]	Loss: 0.5717	LR: 0.020000
Training Epoch: 11 [8960/10020]	Loss: 0.5978	LR: 0.020000
Training Epoch: 11 [9216/10020]	Loss: 0.5408	LR: 0.020000
Training Epoch: 11 [9472/10020]	Loss: 0.6031	LR: 0.020000
Training Epoch: 11 [9728/10020]	Loss: 0.6029	LR: 0.020000
Training Epoch: 11 [9984/10020]	Loss: 0.5334	LR: 0.020000
Training Epoch: 11 [10020/10020]	Loss: 0.4668	LR: 0.020000
Epoch 11 - Average Train Loss: 0.5796, Train Accuracy: 0.6916
Epoch 11 training time consumed: 144.37s
Evaluating Network.....
Test set: Epoch: 11, Average loss: 0.0024, Accuracy: 0.7467, Time consumed:8.12s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-11-best.pth
Training Epoch: 12 [256/10020]	Loss: 0.5401	LR: 0.020000
Training Epoch: 12 [512/10020]	Loss: 0.5843	LR: 0.020000
Training Epoch: 12 [768/10020]	Loss: 0.5515	LR: 0.020000
Training Epoch: 12 [1024/10020]	Loss: 0.5674	LR: 0.020000
Training Epoch: 12 [1280/10020]	Loss: 0.5349	LR: 0.020000
Training Epoch: 12 [1536/10020]	Loss: 0.5428	LR: 0.020000
Training Epoch: 12 [1792/10020]	Loss: 0.5618	LR: 0.020000
Training Epoch: 12 [2048/10020]	Loss: 0.5554	LR: 0.020000
Training Epoch: 12 [2304/10020]	Loss: 0.5537	LR: 0.020000
Training Epoch: 12 [2560/10020]	Loss: 0.5095	LR: 0.020000
Training Epoch: 12 [2816/10020]	Loss: 0.5266	LR: 0.020000
Training Epoch: 12 [3072/10020]	Loss: 0.5209	LR: 0.020000
Training Epoch: 12 [3328/10020]	Loss: 0.5189	LR: 0.020000
Training Epoch: 12 [3584/10020]	Loss: 0.5724	LR: 0.020000
Training Epoch: 12 [3840/10020]	Loss: 0.4862	LR: 0.020000
Training Epoch: 12 [4096/10020]	Loss: 0.5291	LR: 0.020000
Training Epoch: 12 [4352/10020]	Loss: 0.5519	LR: 0.020000
Training Epoch: 12 [4608/10020]	Loss: 0.4637	LR: 0.020000
Training Epoch: 12 [4864/10020]	Loss: 0.5698	LR: 0.020000
Training Epoch: 12 [5120/10020]	Loss: 0.5480	LR: 0.020000
Training Epoch: 12 [5376/10020]	Loss: 0.4480	LR: 0.020000
Training Epoch: 12 [5632/10020]	Loss: 0.5216	LR: 0.020000
Training Epoch: 12 [5888/10020]	Loss: 0.5139	LR: 0.020000
Training Epoch: 12 [6144/10020]	Loss: 0.4923	LR: 0.020000
Training Epoch: 12 [6400/10020]	Loss: 0.4771	LR: 0.020000
Training Epoch: 12 [6656/10020]	Loss: 0.4695	LR: 0.020000
Training Epoch: 12 [6912/10020]	Loss: 0.5066	LR: 0.020000
Training Epoch: 12 [7168/10020]	Loss: 0.5116	LR: 0.020000
Training Epoch: 12 [7424/10020]	Loss: 0.4516	LR: 0.020000
Training Epoch: 12 [7680/10020]	Loss: 0.4755	LR: 0.020000
Training Epoch: 12 [7936/10020]	Loss: 0.4755	LR: 0.020000
Training Epoch: 12 [8192/10020]	Loss: 0.4601	LR: 0.020000
Training Epoch: 12 [8448/10020]	Loss: 0.4981	LR: 0.020000
Training Epoch: 12 [8704/10020]	Loss: 0.4317	LR: 0.020000
Training Epoch: 12 [8960/10020]	Loss: 0.4723	LR: 0.020000
Training Epoch: 12 [9216/10020]	Loss: 0.5126	LR: 0.020000
Training Epoch: 12 [9472/10020]	Loss: 0.5146	LR: 0.020000
Training Epoch: 12 [9728/10020]	Loss: 0.4559	LR: 0.020000
Training Epoch: 12 [9984/10020]	Loss: 0.4632	LR: 0.020000
Training Epoch: 12 [10020/10020]	Loss: 0.5079	LR: 0.020000
Epoch 12 - Average Train Loss: 0.5113, Train Accuracy: 0.7535
Epoch 12 training time consumed: 145.00s
Evaluating Network.....
Test set: Epoch: 12, Average loss: 0.0039, Accuracy: 0.5545, Time consumed:8.07s
Training Epoch: 13 [256/10020]	Loss: 0.5227	LR: 0.020000
Training Epoch: 13 [512/10020]	Loss: 0.4702	LR: 0.020000
Training Epoch: 13 [768/10020]	Loss: 0.4685	LR: 0.020000
Training Epoch: 13 [1024/10020]	Loss: 0.4421	LR: 0.020000
Training Epoch: 13 [1280/10020]	Loss: 0.4498	LR: 0.020000
Training Epoch: 13 [1536/10020]	Loss: 0.4652	LR: 0.020000
Training Epoch: 13 [1792/10020]	Loss: 0.5814	LR: 0.020000
Training Epoch: 13 [2048/10020]	Loss: 0.4824	LR: 0.020000
Training Epoch: 13 [2304/10020]	Loss: 0.4355	LR: 0.020000
Training Epoch: 13 [2560/10020]	Loss: 0.4830	LR: 0.020000
Training Epoch: 13 [2816/10020]	Loss: 0.4242	LR: 0.020000
Training Epoch: 13 [3072/10020]	Loss: 0.5141	LR: 0.020000
Training Epoch: 13 [3328/10020]	Loss: 0.4929	LR: 0.020000
Training Epoch: 13 [3584/10020]	Loss: 0.4589	LR: 0.020000
Training Epoch: 13 [3840/10020]	Loss: 0.5206	LR: 0.020000
Training Epoch: 13 [4096/10020]	Loss: 0.4102	LR: 0.020000
Training Epoch: 13 [4352/10020]	Loss: 0.4514	LR: 0.020000
Training Epoch: 13 [4608/10020]	Loss: 0.4636	LR: 0.020000
Training Epoch: 13 [4864/10020]	Loss: 0.4583	LR: 0.020000
Training Epoch: 13 [5120/10020]	Loss: 0.4730	LR: 0.020000
Training Epoch: 13 [5376/10020]	Loss: 0.4656	LR: 0.020000
Training Epoch: 13 [5632/10020]	Loss: 0.4697	LR: 0.020000
Training Epoch: 13 [5888/10020]	Loss: 0.4190	LR: 0.020000
Training Epoch: 13 [6144/10020]	Loss: 0.4971	LR: 0.020000
Training Epoch: 13 [6400/10020]	Loss: 0.4196	LR: 0.020000
Training Epoch: 13 [6656/10020]	Loss: 0.4472	LR: 0.020000
Training Epoch: 13 [6912/10020]	Loss: 0.4354	LR: 0.020000
Training Epoch: 13 [7168/10020]	Loss: 0.4865	LR: 0.020000
Training Epoch: 13 [7424/10020]	Loss: 0.4518	LR: 0.020000
Training Epoch: 13 [7680/10020]	Loss: 0.4373	LR: 0.020000
Training Epoch: 13 [7936/10020]	Loss: 0.4712	LR: 0.020000
Training Epoch: 13 [8192/10020]	Loss: 0.4127	LR: 0.020000
Training Epoch: 13 [8448/10020]	Loss: 0.5002	LR: 0.020000
Training Epoch: 13 [8704/10020]	Loss: 0.4556	LR: 0.020000
Training Epoch: 13 [8960/10020]	Loss: 0.4955	LR: 0.020000
Training Epoch: 13 [9216/10020]	Loss: 0.3893	LR: 0.020000
Training Epoch: 13 [9472/10020]	Loss: 0.4521	LR: 0.020000
Training Epoch: 13 [9728/10020]	Loss: 0.4720	LR: 0.020000
Training Epoch: 13 [9984/10020]	Loss: 0.4192	LR: 0.020000
Training Epoch: 13 [10020/10020]	Loss: 0.5516	LR: 0.020000
Epoch 13 - Average Train Loss: 0.4635, Train Accuracy: 0.7844
Epoch 13 training time consumed: 144.64s
Evaluating Network.....
Test set: Epoch: 13, Average loss: 0.0029, Accuracy: 0.6397, Time consumed: 8.26s
Training Epoch: 14 [256/10020]	Loss: 0.4654	LR: 0.020000
Training Epoch: 14 [512/10020]	Loss: 0.5005	LR: 0.020000
Training Epoch: 14 [768/10020]	Loss: 0.4883	LR: 0.020000
Training Epoch: 14 [1024/10020]	Loss: 0.4310	LR: 0.020000
Training Epoch: 14 [1280/10020]	Loss: 0.3880	LR: 0.020000
Training Epoch: 14 [1536/10020]	Loss: 0.4028	LR: 0.020000
Training Epoch: 14 [1792/10020]	Loss: 0.4301	LR: 0.020000
Training Epoch: 14 [2048/10020]	Loss: 0.4690	LR: 0.020000
Training Epoch: 14 [2304/10020]	Loss: 0.4964	LR: 0.020000
Training Epoch: 14 [2560/10020]	Loss: 0.3778	LR: 0.020000
Training Epoch: 14 [2816/10020]	Loss: 0.4388	LR: 0.020000
Training Epoch: 14 [3072/10020]	Loss: 0.4539	LR: 0.020000
Training Epoch: 14 [3328/10020]	Loss: 0.4109	LR: 0.020000
Training Epoch: 14 [3584/10020]	Loss: 0.3992	LR: 0.020000
Training Epoch: 14 [3840/10020]	Loss: 0.4971	LR: 0.020000
Training Epoch: 14 [4096/10020]	Loss: 0.4528	LR: 0.020000
Training Epoch: 14 [4352/10020]	Loss: 0.4115	LR: 0.020000
Training Epoch: 14 [4608/10020]	Loss: 0.4640	LR: 0.020000
Training Epoch: 14 [4864/10020]	Loss: 0.3731	LR: 0.020000
Training Epoch: 14 [5120/10020]	Loss: 0.4498	LR: 0.020000
Training Epoch: 14 [5376/10020]	Loss: 0.4455	LR: 0.020000
Training Epoch: 14 [5632/10020]	Loss: 0.4889	LR: 0.020000
Training Epoch: 14 [5888/10020]	Loss: 0.4017	LR: 0.020000
Training Epoch: 14 [6144/10020]	Loss: 0.4941	LR: 0.020000
Training Epoch: 14 [6400/10020]	Loss: 0.3985	LR: 0.020000
Training Epoch: 14 [6656/10020]	Loss: 0.3949	LR: 0.020000
Training Epoch: 14 [6912/10020]	Loss: 0.4088	LR: 0.020000
Training Epoch: 14 [7168/10020]	Loss: 0.4306	LR: 0.020000
Training Epoch: 14 [7424/10020]	Loss: 0.4145	LR: 0.020000
Training Epoch: 14 [7680/10020]	Loss: 0.4529	LR: 0.020000
Training Epoch: 14 [7936/10020]	Loss: 0.4470	LR: 0.020000
Training Epoch: 14 [8192/10020]	Loss: 0.4235	LR: 0.020000
Training Epoch: 14 [8448/10020]	Loss: 0.3813	LR: 0.020000
Training Epoch: 14 [8704/10020]	Loss: 0.4374	LR: 0.020000
Training Epoch: 14 [8960/10020]	Loss: 0.3802	LR: 0.020000
Training Epoch: 14 [9216/10020]	Loss: 0.3953	LR: 0.020000
Training Epoch: 14 [9472/10020]	Loss: 0.4631	LR: 0.020000
Training Epoch: 14 [9728/10020]	Loss: 0.3505	LR: 0.020000
Training Epoch: 14 [9984/10020]	Loss: 0.4070	LR: 0.020000
Training Epoch: 14 [10020/10020]	Loss: 0.5510	LR: 0.020000
Epoch 14 - Average Train Loss: 0.4316, Train Accuracy: 0.8053
Epoch 14 training time consumed: 144.87s
Evaluating Network.....
Test set: Epoch: 14, Average loss: 0.0029, Accuracy: 0.6804, Time consumed: 7.89s
Training Epoch: 15 [256/10020]	Loss: 0.4460	LR: 0.020000
Training Epoch: 15 [512/10020]	Loss: 0.4437	LR: 0.020000
Training Epoch: 15 [768/10020]	Loss: 0.4347	LR: 0.020000
Training Epoch: 15 [1024/10020]	Loss: 0.4031	LR: 0.020000
Training Epoch: 15 [1280/10020]	Loss: 0.4515	LR: 0.020000
Training Epoch: 15 [1536/10020]	Loss: 0.4277	LR: 0.020000
Training Epoch: 15 [1792/10020]	Loss: 0.4581	LR: 0.020000
Training Epoch: 15 [2048/10020]	Loss: 0.3936	LR: 0.020000
Training Epoch: 15 [2304/10020]	Loss: 0.4160	LR: 0.020000
Training Epoch: 15 [2560/10020]	Loss: 0.3697	LR: 0.020000
Training Epoch: 15 [2816/10020]	Loss: 0.4045	LR: 0.020000
Training Epoch: 15 [3072/10020]	Loss: 0.4607	LR: 0.020000
Training Epoch: 15 [3328/10020]	Loss: 0.3926	LR: 0.020000
Training Epoch: 15 [3584/10020]	Loss: 0.4058	LR: 0.020000
Training Epoch: 15 [3840/10020]	Loss: 0.3717	LR: 0.020000
Training Epoch: 15 [4096/10020]	Loss: 0.4023	LR: 0.020000
Training Epoch: 15 [4352/10020]	Loss: 0.4670	LR: 0.020000
Training Epoch: 15 [4608/10020]	Loss: 0.4055	LR: 0.020000
Training Epoch: 15 [4864/10020]	Loss: 0.3586	LR: 0.020000
Training Epoch: 15 [5120/10020]	Loss: 0.3497	LR: 0.020000
Training Epoch: 15 [5376/10020]	Loss: 0.4555	LR: 0.020000
Training Epoch: 15 [5632/10020]	Loss: 0.4062	LR: 0.020000
Training Epoch: 15 [5888/10020]	Loss: 0.4267	LR: 0.020000
Training Epoch: 15 [6144/10020]	Loss: 0.3932	LR: 0.020000
Training Epoch: 15 [6400/10020]	Loss: 0.4158	LR: 0.020000
Training Epoch: 15 [6656/10020]	Loss: 0.4447	LR: 0.020000
Training Epoch: 15 [6912/10020]	Loss: 0.3750	LR: 0.020000
Training Epoch: 15 [7168/10020]	Loss: 0.3781	LR: 0.020000
Training Epoch: 15 [7424/10020]	Loss: 0.3055	LR: 0.020000
Training Epoch: 15 [7680/10020]	Loss: 0.4136	LR: 0.020000
Training Epoch: 15 [7936/10020]	Loss: 0.3473	LR: 0.020000
Training Epoch: 15 [8192/10020]	Loss: 0.3947	LR: 0.020000
Training Epoch: 15 [8448/10020]	Loss: 0.3937	LR: 0.020000
Training Epoch: 15 [8704/10020]	Loss: 0.3531	LR: 0.020000
Training Epoch: 15 [8960/10020]	Loss: 0.3995	LR: 0.020000
Training Epoch: 15 [9216/10020]	Loss: 0.3708	LR: 0.020000
Training Epoch: 15 [9472/10020]	Loss: 0.3883	LR: 0.020000
Training Epoch: 15 [9728/10020]	Loss: 0.3993	LR: 0.020000
Training Epoch: 15 [9984/10020]	Loss: 0.4130	LR: 0.020000
Training Epoch: 15 [10020/10020]	Loss: 0.4447	LR: 0.020000
Epoch 15 - Average Train Loss: 0.4036, Train Accuracy: 0.8197
Epoch 15 training time consumed: 144.25s
Evaluating Network.....
Test set: Epoch: 15, Average loss: 0.0053, Accuracy: 0.5550, Time consumed: 8.06s
Training Epoch: 16 [256/10020]	Loss: 0.3968	LR: 0.020000
Training Epoch: 16 [512/10020]	Loss: 0.4215	LR: 0.020000
Training Epoch: 16 [768/10020]	Loss: 0.4475	LR: 0.020000
Training Epoch: 16 [1024/10020]	Loss: 0.3609	LR: 0.020000
Training Epoch: 16 [1280/10020]	Loss: 0.3962	LR: 0.020000
Training Epoch: 16 [1536/10020]	Loss: 0.3393	LR: 0.020000
Training Epoch: 16 [1792/10020]	Loss: 0.3922	LR: 0.020000
Training Epoch: 16 [2048/10020]	Loss: 0.3926	LR: 0.020000
Training Epoch: 16 [2304/10020]	Loss: 0.3341	LR: 0.020000
Training Epoch: 16 [2560/10020]	Loss: 0.3378	LR: 0.020000
Training Epoch: 16 [2816/10020]	Loss: 0.3639	LR: 0.020000
Training Epoch: 16 [3072/10020]	Loss: 0.4097	LR: 0.020000
Training Epoch: 16 [3328/10020]	Loss: 0.4000	LR: 0.020000
Training Epoch: 16 [3584/10020]	Loss: 0.3607	LR: 0.020000
Training Epoch: 16 [3840/10020]	Loss: 0.3615	LR: 0.020000
Training Epoch: 16 [4096/10020]	Loss: 0.3730	LR: 0.020000
Training Epoch: 16 [4352/10020]	Loss: 0.3544	LR: 0.020000
Training Epoch: 16 [4608/10020]	Loss: 0.4347	LR: 0.020000
Training Epoch: 16 [4864/10020]	Loss: 0.3347	LR: 0.020000
Training Epoch: 16 [5120/10020]	Loss: 0.3631	LR: 0.020000
Training Epoch: 16 [5376/10020]	Loss: 0.3401	LR: 0.020000
Training Epoch: 16 [5632/10020]	Loss: 0.3701	LR: 0.020000
Training Epoch: 16 [5888/10020]	Loss: 0.3870	LR: 0.020000
Training Epoch: 16 [6144/10020]	Loss: 0.3424	LR: 0.020000
Training Epoch: 16 [6400/10020]	Loss: 0.4832	LR: 0.020000
Training Epoch: 16 [6656/10020]	Loss: 0.3431	LR: 0.020000
Training Epoch: 16 [6912/10020]	Loss: 0.3872	LR: 0.020000
Training Epoch: 16 [7168/10020]	Loss: 0.3582	LR: 0.020000
Training Epoch: 16 [7424/10020]	Loss: 0.3465	LR: 0.020000
Training Epoch: 16 [7680/10020]	Loss: 0.3864	LR: 0.020000
Training Epoch: 16 [7936/10020]	Loss: 0.3739	LR: 0.020000
Training Epoch: 16 [8192/10020]	Loss: 0.3598	LR: 0.020000
Training Epoch: 16 [8448/10020]	Loss: 0.3416	LR: 0.020000
Training Epoch: 16 [8704/10020]	Loss: 0.4123	LR: 0.020000
Training Epoch: 16 [8960/10020]	Loss: 0.3284	LR: 0.020000
Training Epoch: 16 [9216/10020]	Loss: 0.3348	LR: 0.020000
Training Epoch: 16 [9472/10020]	Loss: 0.3640	LR: 0.020000
Training Epoch: 16 [9728/10020]	Loss: 0.5132	LR: 0.020000
Training Epoch: 16 [9984/10020]	Loss: 0.3401	LR: 0.020000
Training Epoch: 16 [10020/10020]	Loss: 0.2952	LR: 0.020000
Epoch 16 - Average Train Loss: 0.3763, Train Accuracy: 0.8370
Epoch 16 training time consumed: 144.89s
Evaluating Network.....
Test set: Epoch: 16, Average loss: 0.0030, Accuracy: 0.7274, Time consumed: 7.99s
Training Epoch: 17 [256/10020]	Loss: 0.3725	LR: 0.020000
Training Epoch: 17 [512/10020]	Loss: 0.4247	LR: 0.020000
Training Epoch: 17 [768/10020]	Loss: 0.4076	LR: 0.020000
Training Epoch: 17 [1024/10020]	Loss: 0.3504	LR: 0.020000
Training Epoch: 17 [1280/10020]	Loss: 0.3524	LR: 0.020000
Training Epoch: 17 [1536/10020]	Loss: 0.4566	LR: 0.020000
Training Epoch: 17 [1792/10020]	Loss: 0.4058	LR: 0.020000
Training Epoch: 17 [2048/10020]	Loss: 0.3974	LR: 0.020000
Training Epoch: 17 [2304/10020]	Loss: 0.3858	LR: 0.020000
Training Epoch: 17 [2560/10020]	Loss: 0.4057	LR: 0.020000
Training Epoch: 17 [2816/10020]	Loss: 0.3421	LR: 0.020000
Training Epoch: 17 [3072/10020]	Loss: 0.3952	LR: 0.020000
Training Epoch: 17 [3328/10020]	Loss: 0.3951	LR: 0.020000
Training Epoch: 17 [3584/10020]	Loss: 0.3651	LR: 0.020000
Training Epoch: 17 [3840/10020]	Loss: 0.3482	LR: 0.020000
Training Epoch: 17 [4096/10020]	Loss: 0.3836	LR: 0.020000
Training Epoch: 17 [4352/10020]	Loss: 0.4128	LR: 0.020000
Training Epoch: 17 [4608/10020]	Loss: 0.4411	LR: 0.020000
Training Epoch: 17 [4864/10020]	Loss: 0.4042	LR: 0.020000
Training Epoch: 17 [5120/10020]	Loss: 0.3374	LR: 0.020000
Training Epoch: 17 [5376/10020]	Loss: 0.3331	LR: 0.020000
Training Epoch: 17 [5632/10020]	Loss: 0.4117	LR: 0.020000
Training Epoch: 17 [5888/10020]	Loss: 0.3203	LR: 0.020000
Training Epoch: 17 [6144/10020]	Loss: 0.3059	LR: 0.020000
Training Epoch: 17 [6400/10020]	Loss: 0.3035	LR: 0.020000
Training Epoch: 17 [6656/10020]	Loss: 0.3952	LR: 0.020000
Training Epoch: 17 [6912/10020]	Loss: 0.3187	LR: 0.020000
Training Epoch: 17 [7168/10020]	Loss: 0.3496	LR: 0.020000
Training Epoch: 17 [7424/10020]	Loss: 0.3342	LR: 0.020000
Training Epoch: 17 [7680/10020]	Loss: 0.3111	LR: 0.020000
Training Epoch: 17 [7936/10020]	Loss: 0.3879	LR: 0.020000
Training Epoch: 17 [8192/10020]	Loss: 0.4451	LR: 0.020000
Training Epoch: 17 [8448/10020]	Loss: 0.3252	LR: 0.020000
Training Epoch: 17 [8704/10020]	Loss: 0.3685	LR: 0.020000
Training Epoch: 17 [8960/10020]	Loss: 0.3355	LR: 0.020000
Training Epoch: 17 [9216/10020]	Loss: 0.3817	LR: 0.020000
Training Epoch: 17 [9472/10020]	Loss: 0.2641	LR: 0.020000
Training Epoch: 17 [9728/10020]	Loss: 0.3058	LR: 0.020000
Training Epoch: 17 [9984/10020]	Loss: 0.2995	LR: 0.020000
Training Epoch: 17 [10020/10020]	Loss: 0.3609	LR: 0.020000
Epoch 17 - Average Train Loss: 0.3662, Train Accuracy: 0.8356
Epoch 17 training time consumed: 144.69s
Evaluating Network.....
Test set: Epoch: 17, Average loss: 0.0019, Accuracy: 0.8416, Time consumed: 8.16s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-17-best.pth
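The "Saving weights file" lines appear only after epochs whose test accuracy improves on the run's best so far (epochs 17, 18, 19, 20, 21, and 24 in this log; epochs 22 and 23 do not strictly beat the 0.9104 from epoch 21 and produce no save). A minimal sketch of that save-if-best rule, assuming a strict-improvement comparison; the actual script is not shown, and `save_fn` is a hypothetical stand-in for the weight-serialization call:

```python
import os

def maybe_save_best(test_acc, best_acc, epoch, ckpt_dir, save_fn):
    # Write a "-best" checkpoint only when this epoch's test accuracy
    # strictly exceeds the best seen so far; otherwise keep the old best.
    if test_acc > best_acc:
        path = os.path.join(
            ckpt_dir, f"ResNet18-MUCAC-seed2-ret50-{epoch}-best.pth")
        save_fn(path)
        return test_acc
    return best_acc
```

Under strict comparison, epoch 23's 0.9104 (a tie with epoch 21) triggers no save, matching the log.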
Training Epoch: 18 [256/10020]	Loss: 0.3313	LR: 0.020000
Training Epoch: 18 [512/10020]	Loss: 0.3146	LR: 0.020000
Training Epoch: 18 [768/10020]	Loss: 0.3238	LR: 0.020000
Training Epoch: 18 [1024/10020]	Loss: 0.3534	LR: 0.020000
Training Epoch: 18 [1280/10020]	Loss: 0.4498	LR: 0.020000
Training Epoch: 18 [1536/10020]	Loss: 0.4182	LR: 0.020000
Training Epoch: 18 [1792/10020]	Loss: 0.3717	LR: 0.020000
Training Epoch: 18 [2048/10020]	Loss: 0.3656	LR: 0.020000
Training Epoch: 18 [2304/10020]	Loss: 0.3434	LR: 0.020000
Training Epoch: 18 [2560/10020]	Loss: 0.3384	LR: 0.020000
Training Epoch: 18 [2816/10020]	Loss: 0.3975	LR: 0.020000
Training Epoch: 18 [3072/10020]	Loss: 0.3370	LR: 0.020000
Training Epoch: 18 [3328/10020]	Loss: 0.3818	LR: 0.020000
Training Epoch: 18 [3584/10020]	Loss: 0.3709	LR: 0.020000
Training Epoch: 18 [3840/10020]	Loss: 0.3397	LR: 0.020000
Training Epoch: 18 [4096/10020]	Loss: 0.2951	LR: 0.020000
Training Epoch: 18 [4352/10020]	Loss: 0.2765	LR: 0.020000
Training Epoch: 18 [4608/10020]	Loss: 0.4073	LR: 0.020000
Training Epoch: 18 [4864/10020]	Loss: 0.3361	LR: 0.020000
Training Epoch: 18 [5120/10020]	Loss: 0.3263	LR: 0.020000
Training Epoch: 18 [5376/10020]	Loss: 0.2930	LR: 0.020000
Training Epoch: 18 [5632/10020]	Loss: 0.3564	LR: 0.020000
Training Epoch: 18 [5888/10020]	Loss: 0.3172	LR: 0.020000
Training Epoch: 18 [6144/10020]	Loss: 0.3061	LR: 0.020000
Training Epoch: 18 [6400/10020]	Loss: 0.2784	LR: 0.020000
Training Epoch: 18 [6656/10020]	Loss: 0.3766	LR: 0.020000
Training Epoch: 18 [6912/10020]	Loss: 0.3119	LR: 0.020000
Training Epoch: 18 [7168/10020]	Loss: 0.3432	LR: 0.020000
Training Epoch: 18 [7424/10020]	Loss: 0.3001	LR: 0.020000
Training Epoch: 18 [7680/10020]	Loss: 0.2727	LR: 0.020000
Training Epoch: 18 [7936/10020]	Loss: 0.3513	LR: 0.020000
Training Epoch: 18 [8192/10020]	Loss: 0.2836	LR: 0.020000
Training Epoch: 18 [8448/10020]	Loss: 0.2820	LR: 0.020000
Training Epoch: 18 [8704/10020]	Loss: 0.2921	LR: 0.020000
Training Epoch: 18 [8960/10020]	Loss: 0.3062	LR: 0.020000
Training Epoch: 18 [9216/10020]	Loss: 0.3465	LR: 0.020000
Training Epoch: 18 [9472/10020]	Loss: 0.3031	LR: 0.020000
Training Epoch: 18 [9728/10020]	Loss: 0.2829	LR: 0.020000
Training Epoch: 18 [9984/10020]	Loss: 0.3203	LR: 0.020000
Training Epoch: 18 [10020/10020]	Loss: 0.2063	LR: 0.020000
Epoch 18 - Average Train Loss: 0.3329, Train Accuracy: 0.8572
Epoch 18 training time consumed: 145.39s
Evaluating Network.....
Test set: Epoch: 18, Average loss: 0.0013, Accuracy: 0.8683, Time consumed: 7.95s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-18-best.pth
Training Epoch: 19 [256/10020]	Loss: 0.3376	LR: 0.020000
Training Epoch: 19 [512/10020]	Loss: 0.3041	LR: 0.020000
Training Epoch: 19 [768/10020]	Loss: 0.2980	LR: 0.020000
Training Epoch: 19 [1024/10020]	Loss: 0.3100	LR: 0.020000
Training Epoch: 19 [1280/10020]	Loss: 0.2936	LR: 0.020000
Training Epoch: 19 [1536/10020]	Loss: 0.2844	LR: 0.020000
Training Epoch: 19 [1792/10020]	Loss: 0.3054	LR: 0.020000
Training Epoch: 19 [2048/10020]	Loss: 0.2599	LR: 0.020000
Training Epoch: 19 [2304/10020]	Loss: 0.3160	LR: 0.020000
Training Epoch: 19 [2560/10020]	Loss: 0.3157	LR: 0.020000
Training Epoch: 19 [2816/10020]	Loss: 0.2822	LR: 0.020000
Training Epoch: 19 [3072/10020]	Loss: 0.3574	LR: 0.020000
Training Epoch: 19 [3328/10020]	Loss: 0.2976	LR: 0.020000
Training Epoch: 19 [3584/10020]	Loss: 0.2468	LR: 0.020000
Training Epoch: 19 [3840/10020]	Loss: 0.2497	LR: 0.020000
Training Epoch: 19 [4096/10020]	Loss: 0.2495	LR: 0.020000
Training Epoch: 19 [4352/10020]	Loss: 0.2821	LR: 0.020000
Training Epoch: 19 [4608/10020]	Loss: 0.2605	LR: 0.020000
Training Epoch: 19 [4864/10020]	Loss: 0.2597	LR: 0.020000
Training Epoch: 19 [5120/10020]	Loss: 0.2735	LR: 0.020000
Training Epoch: 19 [5376/10020]	Loss: 0.2772	LR: 0.020000
Training Epoch: 19 [5632/10020]	Loss: 0.2651	LR: 0.020000
Training Epoch: 19 [5888/10020]	Loss: 0.2839	LR: 0.020000
Training Epoch: 19 [6144/10020]	Loss: 0.2767	LR: 0.020000
Training Epoch: 19 [6400/10020]	Loss: 0.2756	LR: 0.020000
Training Epoch: 19 [6656/10020]	Loss: 0.2614	LR: 0.020000
Training Epoch: 19 [6912/10020]	Loss: 0.2904	LR: 0.020000
Training Epoch: 19 [7168/10020]	Loss: 0.2948	LR: 0.020000
Training Epoch: 19 [7424/10020]	Loss: 0.2289	LR: 0.020000
Training Epoch: 19 [7680/10020]	Loss: 0.2619	LR: 0.020000
Training Epoch: 19 [7936/10020]	Loss: 0.2152	LR: 0.020000
Training Epoch: 19 [8192/10020]	Loss: 0.2866	LR: 0.020000
Training Epoch: 19 [8448/10020]	Loss: 0.2978	LR: 0.020000
Training Epoch: 19 [8704/10020]	Loss: 0.2926	LR: 0.020000
Training Epoch: 19 [8960/10020]	Loss: 0.2193	LR: 0.020000
Training Epoch: 19 [9216/10020]	Loss: 0.3063	LR: 0.020000
Training Epoch: 19 [9472/10020]	Loss: 0.3446	LR: 0.020000
Training Epoch: 19 [9728/10020]	Loss: 0.2640	LR: 0.020000
Training Epoch: 19 [9984/10020]	Loss: 0.4000	LR: 0.020000
Training Epoch: 19 [10020/10020]	Loss: 0.5394	LR: 0.020000
Epoch 19 - Average Train Loss: 0.2862, Train Accuracy: 0.8776
Epoch 19 training time consumed: 144.26s
Evaluating Network.....
Test set: Epoch: 19, Average loss: 0.0015, Accuracy: 0.8712, Time consumed: 8.29s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-19-best.pth
Training Epoch: 20 [256/10020]	Loss: 0.3489	LR: 0.004000
Training Epoch: 20 [512/10020]	Loss: 0.2890	LR: 0.004000
Training Epoch: 20 [768/10020]	Loss: 0.3043	LR: 0.004000
Training Epoch: 20 [1024/10020]	Loss: 0.3446	LR: 0.004000
Training Epoch: 20 [1280/10020]	Loss: 0.3003	LR: 0.004000
Training Epoch: 20 [1536/10020]	Loss: 0.3508	LR: 0.004000
Training Epoch: 20 [1792/10020]	Loss: 0.3253	LR: 0.004000
Training Epoch: 20 [2048/10020]	Loss: 0.2773	LR: 0.004000
Training Epoch: 20 [2304/10020]	Loss: 0.2828	LR: 0.004000
Training Epoch: 20 [2560/10020]	Loss: 0.3388	LR: 0.004000
Training Epoch: 20 [2816/10020]	Loss: 0.2834	LR: 0.004000
Training Epoch: 20 [3072/10020]	Loss: 0.2994	LR: 0.004000
Training Epoch: 20 [3328/10020]	Loss: 0.2777	LR: 0.004000
Training Epoch: 20 [3584/10020]	Loss: 0.2448	LR: 0.004000
Training Epoch: 20 [3840/10020]	Loss: 0.1890	LR: 0.004000
Training Epoch: 20 [4096/10020]	Loss: 0.2549	LR: 0.004000
Training Epoch: 20 [4352/10020]	Loss: 0.2494	LR: 0.004000
Training Epoch: 20 [4608/10020]	Loss: 0.2657	LR: 0.004000
Training Epoch: 20 [4864/10020]	Loss: 0.3883	LR: 0.004000
Training Epoch: 20 [5120/10020]	Loss: 0.2957	LR: 0.004000
Training Epoch: 20 [5376/10020]	Loss: 0.2949	LR: 0.004000
Training Epoch: 20 [5632/10020]	Loss: 0.2655	LR: 0.004000
Training Epoch: 20 [5888/10020]	Loss: 0.2963	LR: 0.004000
Training Epoch: 20 [6144/10020]	Loss: 0.2606	LR: 0.004000
Training Epoch: 20 [6400/10020]	Loss: 0.2603	LR: 0.004000
Training Epoch: 20 [6656/10020]	Loss: 0.2142	LR: 0.004000
Training Epoch: 20 [6912/10020]	Loss: 0.2574	LR: 0.004000
Training Epoch: 20 [7168/10020]	Loss: 0.2175	LR: 0.004000
Training Epoch: 20 [7424/10020]	Loss: 0.2295	LR: 0.004000
Training Epoch: 20 [7680/10020]	Loss: 0.2492	LR: 0.004000
Training Epoch: 20 [7936/10020]	Loss: 0.2652	LR: 0.004000
Training Epoch: 20 [8192/10020]	Loss: 0.2906	LR: 0.004000
Training Epoch: 20 [8448/10020]	Loss: 0.2224	LR: 0.004000
Training Epoch: 20 [8704/10020]	Loss: 0.2203	LR: 0.004000
Training Epoch: 20 [8960/10020]	Loss: 0.2091	LR: 0.004000
Training Epoch: 20 [9216/10020]	Loss: 0.2117	LR: 0.004000
Training Epoch: 20 [9472/10020]	Loss: 0.2092	LR: 0.004000
Training Epoch: 20 [9728/10020]	Loss: 0.2469	LR: 0.004000
Training Epoch: 20 [9984/10020]	Loss: 0.2003	LR: 0.004000
Training Epoch: 20 [10020/10020]	Loss: 0.2811	LR: 0.004000
Epoch 20 - Average Train Loss: 0.2701, Train Accuracy: 0.8817
Epoch 20 training time consumed: 144.87s
Evaluating Network.....
Test set: Epoch: 20, Average loss: 0.0010, Accuracy: 0.8998, Time consumed: 8.08s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-20-best.pth
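The learning rate holds at 0.020 through epoch 19 and drops to 0.004 from epoch 20 onward, a 5x cut consistent with a piecewise-constant step decay (gamma = 0.2 at a milestone of epoch 20). A sketch of that schedule, assuming those milestone/gamma values; the actual scheduler configuration is not shown in the log:

```python
def lr_for_epoch(epoch, base_lr=0.02, milestones=(20,), gamma=0.2):
    # Piecewise-constant step decay: multiply by gamma once for each
    # milestone the epoch has reached. Matches the logged values:
    # 0.020 for epochs < 20, 0.004 for epochs >= 20.
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

The same effect is commonly achieved with a `MultiStepLR`-style scheduler stepped once per epoch.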
Training Epoch: 21 [256/10020]	Loss: 0.2561	LR: 0.004000
Training Epoch: 21 [512/10020]	Loss: 0.2511	LR: 0.004000
Training Epoch: 21 [768/10020]	Loss: 0.2467	LR: 0.004000
Training Epoch: 21 [1024/10020]	Loss: 0.1941	LR: 0.004000
Training Epoch: 21 [1280/10020]	Loss: 0.2346	LR: 0.004000
Training Epoch: 21 [1536/10020]	Loss: 0.2137	LR: 0.004000
Training Epoch: 21 [1792/10020]	Loss: 0.2220	LR: 0.004000
Training Epoch: 21 [2048/10020]	Loss: 0.2225	LR: 0.004000
Training Epoch: 21 [2304/10020]	Loss: 0.2536	LR: 0.004000
Training Epoch: 21 [2560/10020]	Loss: 0.2515	LR: 0.004000
Training Epoch: 21 [2816/10020]	Loss: 0.2982	LR: 0.004000
Training Epoch: 21 [3072/10020]	Loss: 0.2449	LR: 0.004000
Training Epoch: 21 [3328/10020]	Loss: 0.2016	LR: 0.004000
Training Epoch: 21 [3584/10020]	Loss: 0.2294	LR: 0.004000
Training Epoch: 21 [3840/10020]	Loss: 0.2325	LR: 0.004000
Training Epoch: 21 [4096/10020]	Loss: 0.2337	LR: 0.004000
Training Epoch: 21 [4352/10020]	Loss: 0.2467	LR: 0.004000
Training Epoch: 21 [4608/10020]	Loss: 0.2438	LR: 0.004000
Training Epoch: 21 [4864/10020]	Loss: 0.2356	LR: 0.004000
Training Epoch: 21 [5120/10020]	Loss: 0.1933	LR: 0.004000
Training Epoch: 21 [5376/10020]	Loss: 0.2158	LR: 0.004000
Training Epoch: 21 [5632/10020]	Loss: 0.2030	LR: 0.004000
Training Epoch: 21 [5888/10020]	Loss: 0.2266	LR: 0.004000
Training Epoch: 21 [6144/10020]	Loss: 0.2402	LR: 0.004000
Training Epoch: 21 [6400/10020]	Loss: 0.2258	LR: 0.004000
Training Epoch: 21 [6656/10020]	Loss: 0.2914	LR: 0.004000
Training Epoch: 21 [6912/10020]	Loss: 0.2447	LR: 0.004000
Training Epoch: 21 [7168/10020]	Loss: 0.2372	LR: 0.004000
Training Epoch: 21 [7424/10020]	Loss: 0.2271	LR: 0.004000
Training Epoch: 21 [7680/10020]	Loss: 0.2767	LR: 0.004000
Training Epoch: 21 [7936/10020]	Loss: 0.2501	LR: 0.004000
Training Epoch: 21 [8192/10020]	Loss: 0.2025	LR: 0.004000
Training Epoch: 21 [8448/10020]	Loss: 0.2884	LR: 0.004000
Training Epoch: 21 [8704/10020]	Loss: 0.2141	LR: 0.004000
Training Epoch: 21 [8960/10020]	Loss: 0.1624	LR: 0.004000
Training Epoch: 21 [9216/10020]	Loss: 0.2170	LR: 0.004000
Training Epoch: 21 [9472/10020]	Loss: 0.2310	LR: 0.004000
Training Epoch: 21 [9728/10020]	Loss: 0.2398	LR: 0.004000
Training Epoch: 21 [9984/10020]	Loss: 0.1912	LR: 0.004000
Training Epoch: 21 [10020/10020]	Loss: 0.2233	LR: 0.004000
Epoch 21 - Average Train Loss: 0.2331, Train Accuracy: 0.9033
Epoch 21 training time consumed: 144.61s
Evaluating Network.....
Test set: Epoch: 21, Average loss: 0.0009, Accuracy: 0.9104, Time consumed: 8.09s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-21-best.pth
Training Epoch: 22 [256/10020]	Loss: 0.2482	LR: 0.004000
Training Epoch: 22 [512/10020]	Loss: 0.1778	LR: 0.004000
Training Epoch: 22 [768/10020]	Loss: 0.2702	LR: 0.004000
Training Epoch: 22 [1024/10020]	Loss: 0.2811	LR: 0.004000
Training Epoch: 22 [1280/10020]	Loss: 0.2346	LR: 0.004000
Training Epoch: 22 [1536/10020]	Loss: 0.2072	LR: 0.004000
Training Epoch: 22 [1792/10020]	Loss: 0.1857	LR: 0.004000
Training Epoch: 22 [2048/10020]	Loss: 0.2113	LR: 0.004000
Training Epoch: 22 [2304/10020]	Loss: 0.2516	LR: 0.004000
Training Epoch: 22 [2560/10020]	Loss: 0.2946	LR: 0.004000
Training Epoch: 22 [2816/10020]	Loss: 0.2443	LR: 0.004000
Training Epoch: 22 [3072/10020]	Loss: 0.2579	LR: 0.004000
Training Epoch: 22 [3328/10020]	Loss: 0.2412	LR: 0.004000
Training Epoch: 22 [3584/10020]	Loss: 0.2145	LR: 0.004000
Training Epoch: 22 [3840/10020]	Loss: 0.2445	LR: 0.004000
Training Epoch: 22 [4096/10020]	Loss: 0.2649	LR: 0.004000
Training Epoch: 22 [4352/10020]	Loss: 0.2435	LR: 0.004000
Training Epoch: 22 [4608/10020]	Loss: 0.2093	LR: 0.004000
Training Epoch: 22 [4864/10020]	Loss: 0.2571	LR: 0.004000
Training Epoch: 22 [5120/10020]	Loss: 0.2169	LR: 0.004000
Training Epoch: 22 [5376/10020]	Loss: 0.1901	LR: 0.004000
Training Epoch: 22 [5632/10020]	Loss: 0.1931	LR: 0.004000
Training Epoch: 22 [5888/10020]	Loss: 0.2016	LR: 0.004000
Training Epoch: 22 [6144/10020]	Loss: 0.2675	LR: 0.004000
Training Epoch: 22 [6400/10020]	Loss: 0.2087	LR: 0.004000
Training Epoch: 22 [6656/10020]	Loss: 0.2226	LR: 0.004000
Training Epoch: 22 [6912/10020]	Loss: 0.2302	LR: 0.004000
Training Epoch: 22 [7168/10020]	Loss: 0.1655	LR: 0.004000
Training Epoch: 22 [7424/10020]	Loss: 0.2572	LR: 0.004000
Training Epoch: 22 [7680/10020]	Loss: 0.2222	LR: 0.004000
Training Epoch: 22 [7936/10020]	Loss: 0.2171	LR: 0.004000
Training Epoch: 22 [8192/10020]	Loss: 0.2134	LR: 0.004000
Training Epoch: 22 [8448/10020]	Loss: 0.2372	LR: 0.004000
Training Epoch: 22 [8704/10020]	Loss: 0.2830	LR: 0.004000
Training Epoch: 22 [8960/10020]	Loss: 0.2394	LR: 0.004000
Training Epoch: 22 [9216/10020]	Loss: 0.1843	LR: 0.004000
Training Epoch: 22 [9472/10020]	Loss: 0.1878	LR: 0.004000
Training Epoch: 22 [9728/10020]	Loss: 0.1798	LR: 0.004000
Training Epoch: 22 [9984/10020]	Loss: 0.2835	LR: 0.004000
Training Epoch: 22 [10020/10020]	Loss: 0.0767	LR: 0.004000
Epoch 22 - Average Train Loss: 0.2287, Train Accuracy: 0.9056
Epoch 22 training time consumed: 144.81s
Evaluating Network.....
Test set: Epoch: 22, Average loss: 0.0011, Accuracy: 0.8969, Time consumed: 8.14s
Training Epoch: 23 [256/10020]	Loss: 0.2417	LR: 0.004000
Training Epoch: 23 [512/10020]	Loss: 0.2193	LR: 0.004000
Training Epoch: 23 [768/10020]	Loss: 0.2368	LR: 0.004000
Training Epoch: 23 [1024/10020]	Loss: 0.1693	LR: 0.004000
Training Epoch: 23 [1280/10020]	Loss: 0.2138	LR: 0.004000
Training Epoch: 23 [1536/10020]	Loss: 0.2060	LR: 0.004000
Training Epoch: 23 [1792/10020]	Loss: 0.1993	LR: 0.004000
Training Epoch: 23 [2048/10020]	Loss: 0.2121	LR: 0.004000
Training Epoch: 23 [2304/10020]	Loss: 0.1652	LR: 0.004000
Training Epoch: 23 [2560/10020]	Loss: 0.2088	LR: 0.004000
Training Epoch: 23 [2816/10020]	Loss: 0.2046	LR: 0.004000
Training Epoch: 23 [3072/10020]	Loss: 0.2044	LR: 0.004000
Training Epoch: 23 [3328/10020]	Loss: 0.2315	LR: 0.004000
Training Epoch: 23 [3584/10020]	Loss: 0.2237	LR: 0.004000
Training Epoch: 23 [3840/10020]	Loss: 0.2197	LR: 0.004000
Training Epoch: 23 [4096/10020]	Loss: 0.1863	LR: 0.004000
Training Epoch: 23 [4352/10020]	Loss: 0.1459	LR: 0.004000
Training Epoch: 23 [4608/10020]	Loss: 0.1861	LR: 0.004000
Training Epoch: 23 [4864/10020]	Loss: 0.1929	LR: 0.004000
Training Epoch: 23 [5120/10020]	Loss: 0.2162	LR: 0.004000
Training Epoch: 23 [5376/10020]	Loss: 0.2136	LR: 0.004000
Training Epoch: 23 [5632/10020]	Loss: 0.2436	LR: 0.004000
Training Epoch: 23 [5888/10020]	Loss: 0.1795	LR: 0.004000
Training Epoch: 23 [6144/10020]	Loss: 0.2483	LR: 0.004000
Training Epoch: 23 [6400/10020]	Loss: 0.2201	LR: 0.004000
Training Epoch: 23 [6656/10020]	Loss: 0.2342	LR: 0.004000
Training Epoch: 23 [6912/10020]	Loss: 0.2020	LR: 0.004000
Training Epoch: 23 [7168/10020]	Loss: 0.1717	LR: 0.004000
Training Epoch: 23 [7424/10020]	Loss: 0.2543	LR: 0.004000
Training Epoch: 23 [7680/10020]	Loss: 0.2596	LR: 0.004000
Training Epoch: 23 [7936/10020]	Loss: 0.2138	LR: 0.004000
Training Epoch: 23 [8192/10020]	Loss: 0.1820	LR: 0.004000
Training Epoch: 23 [8448/10020]	Loss: 0.2294	LR: 0.004000
Training Epoch: 23 [8704/10020]	Loss: 0.2491	LR: 0.004000
Training Epoch: 23 [8960/10020]	Loss: 0.2521	LR: 0.004000
Training Epoch: 23 [9216/10020]	Loss: 0.2522	LR: 0.004000
Training Epoch: 23 [9472/10020]	Loss: 0.1561	LR: 0.004000
Training Epoch: 23 [9728/10020]	Loss: 0.2211	LR: 0.004000
Training Epoch: 23 [9984/10020]	Loss: 0.2051	LR: 0.004000
Training Epoch: 23 [10020/10020]	Loss: 0.1797	LR: 0.004000
Epoch 23 - Average Train Loss: 0.2120, Train Accuracy: 0.9121
Epoch 23 training time consumed: 144.86s
Evaluating Network.....
Test set: Epoch: 23, Average loss: 0.0008, Accuracy: 0.9104, Time consumed: 8.07s
Training Epoch: 24 [256/10020]	Loss: 0.1889	LR: 0.004000
Training Epoch: 24 [512/10020]	Loss: 0.1849	LR: 0.004000
Training Epoch: 24 [768/10020]	Loss: 0.1807	LR: 0.004000
Training Epoch: 24 [1024/10020]	Loss: 0.2182	LR: 0.004000
Training Epoch: 24 [1280/10020]	Loss: 0.2299	LR: 0.004000
Training Epoch: 24 [1536/10020]	Loss: 0.2381	LR: 0.004000
Training Epoch: 24 [1792/10020]	Loss: 0.2292	LR: 0.004000
Training Epoch: 24 [2048/10020]	Loss: 0.1688	LR: 0.004000
Training Epoch: 24 [2304/10020]	Loss: 0.2014	LR: 0.004000
Training Epoch: 24 [2560/10020]	Loss: 0.2068	LR: 0.004000
Training Epoch: 24 [2816/10020]	Loss: 0.1748	LR: 0.004000
Training Epoch: 24 [3072/10020]	Loss: 0.1769	LR: 0.004000
Training Epoch: 24 [3328/10020]	Loss: 0.2077	LR: 0.004000
Training Epoch: 24 [3584/10020]	Loss: 0.2024	LR: 0.004000
Training Epoch: 24 [3840/10020]	Loss: 0.2273	LR: 0.004000
Training Epoch: 24 [4096/10020]	Loss: 0.1763	LR: 0.004000
Training Epoch: 24 [4352/10020]	Loss: 0.1549	LR: 0.004000
Training Epoch: 24 [4608/10020]	Loss: 0.2056	LR: 0.004000
Training Epoch: 24 [4864/10020]	Loss: 0.1756	LR: 0.004000
Training Epoch: 24 [5120/10020]	Loss: 0.2146	LR: 0.004000
Training Epoch: 24 [5376/10020]	Loss: 0.2196	LR: 0.004000
Training Epoch: 24 [5632/10020]	Loss: 0.2210	LR: 0.004000
Training Epoch: 24 [5888/10020]	Loss: 0.2344	LR: 0.004000
Training Epoch: 24 [6144/10020]	Loss: 0.2569	LR: 0.004000
Training Epoch: 24 [6400/10020]	Loss: 0.2731	LR: 0.004000
Training Epoch: 24 [6656/10020]	Loss: 0.2149	LR: 0.004000
Training Epoch: 24 [6912/10020]	Loss: 0.1828	LR: 0.004000
Training Epoch: 24 [7168/10020]	Loss: 0.2882	LR: 0.004000
Training Epoch: 24 [7424/10020]	Loss: 0.2424	LR: 0.004000
Training Epoch: 24 [7680/10020]	Loss: 0.1877	LR: 0.004000
Training Epoch: 24 [7936/10020]	Loss: 0.2171	LR: 0.004000
Training Epoch: 24 [8192/10020]	Loss: 0.1940	LR: 0.004000
Training Epoch: 24 [8448/10020]	Loss: 0.2072	LR: 0.004000
Training Epoch: 24 [8704/10020]	Loss: 0.2546	LR: 0.004000
Training Epoch: 24 [8960/10020]	Loss: 0.2271	LR: 0.004000
Training Epoch: 24 [9216/10020]	Loss: 0.1798	LR: 0.004000
Training Epoch: 24 [9472/10020]	Loss: 0.1984	LR: 0.004000
Training Epoch: 24 [9728/10020]	Loss: 0.1712	LR: 0.004000
Training Epoch: 24 [9984/10020]	Loss: 0.2011	LR: 0.004000
Training Epoch: 24 [10020/10020]	Loss: 0.1057	LR: 0.004000
Epoch 24 - Average Train Loss: 0.2082, Train Accuracy: 0.9131
Epoch 24 training time consumed: 144.99s
Evaluating Network.....
Test set: Epoch: 24, Average loss: 0.0008, Accuracy: 0.9133, Time consumed: 8.10s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-24-best.pth
Training Epoch: 25 [256/10020]	Loss: 0.2400	LR: 0.004000
Training Epoch: 25 [512/10020]	Loss: 0.1823	LR: 0.004000
Training Epoch: 25 [768/10020]	Loss: 0.2712	LR: 0.004000
Training Epoch: 25 [1024/10020]	Loss: 0.2121	LR: 0.004000
Training Epoch: 25 [1280/10020]	Loss: 0.1787	LR: 0.004000
Training Epoch: 25 [1536/10020]	Loss: 0.2263	LR: 0.004000
Training Epoch: 25 [1792/10020]	Loss: 0.2550	LR: 0.004000
Training Epoch: 25 [2048/10020]	Loss: 0.1945	LR: 0.004000
Training Epoch: 25 [2304/10020]	Loss: 0.2143	LR: 0.004000
Training Epoch: 25 [2560/10020]	Loss: 0.1950	LR: 0.004000
Training Epoch: 25 [2816/10020]	Loss: 0.2063	LR: 0.004000
Training Epoch: 25 [3072/10020]	Loss: 0.1953	LR: 0.004000
Training Epoch: 25 [3328/10020]	Loss: 0.2255	LR: 0.004000
Training Epoch: 25 [3584/10020]	Loss: 0.1769	LR: 0.004000
Training Epoch: 25 [3840/10020]	Loss: 0.1916	LR: 0.004000
Training Epoch: 25 [4096/10020]	Loss: 0.1811	LR: 0.004000
Training Epoch: 25 [4352/10020]	Loss: 0.1697	LR: 0.004000
Training Epoch: 25 [4608/10020]	Loss: 0.2395	LR: 0.004000
Training Epoch: 25 [4864/10020]	Loss: 0.1510	LR: 0.004000
Training Epoch: 25 [5120/10020]	Loss: 0.2350	LR: 0.004000
Training Epoch: 25 [5376/10020]	Loss: 0.2084	LR: 0.004000
Training Epoch: 25 [5632/10020]	Loss: 0.1914	LR: 0.004000
Training Epoch: 25 [5888/10020]	Loss: 0.2021	LR: 0.004000
Training Epoch: 25 [6144/10020]	Loss: 0.1915	LR: 0.004000
Training Epoch: 25 [6400/10020]	Loss: 0.2265	LR: 0.004000
Training Epoch: 25 [6656/10020]	Loss: 0.1872	LR: 0.004000
Training Epoch: 25 [6912/10020]	Loss: 0.2231	LR: 0.004000
Training Epoch: 25 [7168/10020]	Loss: 0.1932	LR: 0.004000
Training Epoch: 25 [7424/10020]	Loss: 0.1996	LR: 0.004000
Training Epoch: 25 [7680/10020]	Loss: 0.1903	LR: 0.004000
Training Epoch: 25 [7936/10020]	Loss: 0.1609	LR: 0.004000
Training Epoch: 25 [8192/10020]	Loss: 0.2232	LR: 0.004000
Training Epoch: 25 [8448/10020]	Loss: 0.1248	LR: 0.004000
Training Epoch: 25 [8704/10020]	Loss: 0.2445	LR: 0.004000
Training Epoch: 25 [8960/10020]	Loss: 0.2266	LR: 0.004000
Training Epoch: 25 [9216/10020]	Loss: 0.2222	LR: 0.004000
Training Epoch: 25 [9472/10020]	Loss: 0.1811	LR: 0.004000
Training Epoch: 25 [9728/10020]	Loss: 0.1793	LR: 0.004000
Training Epoch: 25 [9984/10020]	Loss: 0.2053	LR: 0.004000
Training Epoch: 25 [10020/10020]	Loss: 0.2089	LR: 0.004000
Epoch 25 - Average Train Loss: 0.2032, Train Accuracy: 0.9158
Epoch 25 training time consumed: 145.01s
Evaluating Network.....
Test set: Epoch: 25, Average loss: 0.0010, Accuracy: 0.9056, Time consumed:7.87s
Training Epoch: 26 [256/10020]	Loss: 0.2671	LR: 0.004000
Training Epoch: 26 [512/10020]	Loss: 0.2046	LR: 0.004000
Training Epoch: 26 [768/10020]	Loss: 0.1894	LR: 0.004000
Training Epoch: 26 [1024/10020]	Loss: 0.1900	LR: 0.004000
Training Epoch: 26 [1280/10020]	Loss: 0.1823	LR: 0.004000
Training Epoch: 26 [1536/10020]	Loss: 0.2069	LR: 0.004000
Training Epoch: 26 [1792/10020]	Loss: 0.1737	LR: 0.004000
Training Epoch: 26 [2048/10020]	Loss: 0.1736	LR: 0.004000
Training Epoch: 26 [2304/10020]	Loss: 0.1889	LR: 0.004000
Training Epoch: 26 [2560/10020]	Loss: 0.2071	LR: 0.004000
Training Epoch: 26 [2816/10020]	Loss: 0.1570	LR: 0.004000
Training Epoch: 26 [3072/10020]	Loss: 0.2287	LR: 0.004000
Training Epoch: 26 [3328/10020]	Loss: 0.2296	LR: 0.004000
Training Epoch: 26 [3584/10020]	Loss: 0.1909	LR: 0.004000
Training Epoch: 26 [3840/10020]	Loss: 0.1714	LR: 0.004000
Training Epoch: 26 [4096/10020]	Loss: 0.1831	LR: 0.004000
Training Epoch: 26 [4352/10020]	Loss: 0.2080	LR: 0.004000
Training Epoch: 26 [4608/10020]	Loss: 0.2115	LR: 0.004000
Training Epoch: 26 [4864/10020]	Loss: 0.2033	LR: 0.004000
Training Epoch: 26 [5120/10020]	Loss: 0.2551	LR: 0.004000
Training Epoch: 26 [5376/10020]	Loss: 0.1669	LR: 0.004000
Training Epoch: 26 [5632/10020]	Loss: 0.2082	LR: 0.004000
Training Epoch: 26 [5888/10020]	Loss: 0.2515	LR: 0.004000
Training Epoch: 26 [6144/10020]	Loss: 0.1545	LR: 0.004000
Training Epoch: 26 [6400/10020]	Loss: 0.1593	LR: 0.004000
Training Epoch: 26 [6656/10020]	Loss: 0.1777	LR: 0.004000
Training Epoch: 26 [6912/10020]	Loss: 0.1677	LR: 0.004000
Training Epoch: 26 [7168/10020]	Loss: 0.2357	LR: 0.004000
Training Epoch: 26 [7424/10020]	Loss: 0.2706	LR: 0.004000
Training Epoch: 26 [7680/10020]	Loss: 0.1789	LR: 0.004000
Training Epoch: 26 [7936/10020]	Loss: 0.1931	LR: 0.004000
Training Epoch: 26 [8192/10020]	Loss: 0.1246	LR: 0.004000
Training Epoch: 26 [8448/10020]	Loss: 0.1821	LR: 0.004000
Training Epoch: 26 [8704/10020]	Loss: 0.1619	LR: 0.004000
Training Epoch: 26 [8960/10020]	Loss: 0.1879	LR: 0.004000
Training Epoch: 26 [9216/10020]	Loss: 0.1727	LR: 0.004000
Training Epoch: 26 [9472/10020]	Loss: 0.2275	LR: 0.004000
Training Epoch: 26 [9728/10020]	Loss: 0.1946	LR: 0.004000
Training Epoch: 26 [9984/10020]	Loss: 0.1907	LR: 0.004000
Training Epoch: 26 [10020/10020]	Loss: 0.1930	LR: 0.004000
Epoch 26 - Average Train Loss: 0.1956, Train Accuracy: 0.9194
Epoch 26 training time consumed: 145.34s
Evaluating Network.....
Test set: Epoch: 26, Average loss: 0.0008, Accuracy: 0.9215, Time consumed:8.10s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-26-best.pth
Training Epoch: 27 [256/10020]	Loss: 0.1570	LR: 0.004000
Training Epoch: 27 [512/10020]	Loss: 0.1183	LR: 0.004000
Training Epoch: 27 [768/10020]	Loss: 0.2051	LR: 0.004000
Training Epoch: 27 [1024/10020]	Loss: 0.2079	LR: 0.004000
Training Epoch: 27 [1280/10020]	Loss: 0.1640	LR: 0.004000
Training Epoch: 27 [1536/10020]	Loss: 0.1910	LR: 0.004000
Training Epoch: 27 [1792/10020]	Loss: 0.1929	LR: 0.004000
Training Epoch: 27 [2048/10020]	Loss: 0.2282	LR: 0.004000
Training Epoch: 27 [2304/10020]	Loss: 0.2175	LR: 0.004000
Training Epoch: 27 [2560/10020]	Loss: 0.1574	LR: 0.004000
Training Epoch: 27 [2816/10020]	Loss: 0.2092	LR: 0.004000
Training Epoch: 27 [3072/10020]	Loss: 0.1688	LR: 0.004000
Training Epoch: 27 [3328/10020]	Loss: 0.2211	LR: 0.004000
Training Epoch: 27 [3584/10020]	Loss: 0.1765	LR: 0.004000
Training Epoch: 27 [3840/10020]	Loss: 0.1853	LR: 0.004000
Training Epoch: 27 [4096/10020]	Loss: 0.2132	LR: 0.004000
Training Epoch: 27 [4352/10020]	Loss: 0.2000	LR: 0.004000
Training Epoch: 27 [4608/10020]	Loss: 0.2312	LR: 0.004000
Training Epoch: 27 [4864/10020]	Loss: 0.1671	LR: 0.004000
Training Epoch: 27 [5120/10020]	Loss: 0.1705	LR: 0.004000
Training Epoch: 27 [5376/10020]	Loss: 0.1620	LR: 0.004000
Training Epoch: 27 [5632/10020]	Loss: 0.1789	LR: 0.004000
Training Epoch: 27 [5888/10020]	Loss: 0.1946	LR: 0.004000
Training Epoch: 27 [6144/10020]	Loss: 0.2312	LR: 0.004000
Training Epoch: 27 [6400/10020]	Loss: 0.2054	LR: 0.004000
Training Epoch: 27 [6656/10020]	Loss: 0.1384	LR: 0.004000
Training Epoch: 27 [6912/10020]	Loss: 0.2030	LR: 0.004000
Training Epoch: 27 [7168/10020]	Loss: 0.2063	LR: 0.004000
Training Epoch: 27 [7424/10020]	Loss: 0.1686	LR: 0.004000
Training Epoch: 27 [7680/10020]	Loss: 0.1920	LR: 0.004000
Training Epoch: 27 [7936/10020]	Loss: 0.1878	LR: 0.004000
Training Epoch: 27 [8192/10020]	Loss: 0.1763	LR: 0.004000
Training Epoch: 27 [8448/10020]	Loss: 0.1926	LR: 0.004000
Training Epoch: 27 [8704/10020]	Loss: 0.2028	LR: 0.004000
Training Epoch: 27 [8960/10020]	Loss: 0.2364	LR: 0.004000
Training Epoch: 27 [9216/10020]	Loss: 0.1573	LR: 0.004000
Training Epoch: 27 [9472/10020]	Loss: 0.1588	LR: 0.004000
Training Epoch: 27 [9728/10020]	Loss: 0.1824	LR: 0.004000
Training Epoch: 27 [9984/10020]	Loss: 0.1423	LR: 0.004000
Training Epoch: 27 [10020/10020]	Loss: 0.0841	LR: 0.004000
Epoch 27 - Average Train Loss: 0.1868, Train Accuracy: 0.9217
Epoch 27 training time consumed: 144.76s
Evaluating Network.....
Test set: Epoch: 27, Average loss: 0.0009, Accuracy: 0.9075, Time consumed:7.95s
Training Epoch: 28 [256/10020]	Loss: 0.1403	LR: 0.004000
Training Epoch: 28 [512/10020]	Loss: 0.1851	LR: 0.004000
Training Epoch: 28 [768/10020]	Loss: 0.2195	LR: 0.004000
Training Epoch: 28 [1024/10020]	Loss: 0.2034	LR: 0.004000
Training Epoch: 28 [1280/10020]	Loss: 0.1975	LR: 0.004000
Training Epoch: 28 [1536/10020]	Loss: 0.1971	LR: 0.004000
Training Epoch: 28 [1792/10020]	Loss: 0.2220	LR: 0.004000
Training Epoch: 28 [2048/10020]	Loss: 0.1261	LR: 0.004000
Training Epoch: 28 [2304/10020]	Loss: 0.1651	LR: 0.004000
Training Epoch: 28 [2560/10020]	Loss: 0.1649	LR: 0.004000
Training Epoch: 28 [2816/10020]	Loss: 0.1728	LR: 0.004000
Training Epoch: 28 [3072/10020]	Loss: 0.1911	LR: 0.004000
Training Epoch: 28 [3328/10020]	Loss: 0.1884	LR: 0.004000
Training Epoch: 28 [3584/10020]	Loss: 0.1510	LR: 0.004000
Training Epoch: 28 [3840/10020]	Loss: 0.2044	LR: 0.004000
Training Epoch: 28 [4096/10020]	Loss: 0.1687	LR: 0.004000
Training Epoch: 28 [4352/10020]	Loss: 0.1801	LR: 0.004000
Training Epoch: 28 [4608/10020]	Loss: 0.1872	LR: 0.004000
Training Epoch: 28 [4864/10020]	Loss: 0.1778	LR: 0.004000
Training Epoch: 28 [5120/10020]	Loss: 0.1483	LR: 0.004000
Training Epoch: 28 [5376/10020]	Loss: 0.2448	LR: 0.004000
Training Epoch: 28 [5632/10020]	Loss: 0.1511	LR: 0.004000
Training Epoch: 28 [5888/10020]	Loss: 0.1432	LR: 0.004000
Training Epoch: 28 [6144/10020]	Loss: 0.1452	LR: 0.004000
Training Epoch: 28 [6400/10020]	Loss: 0.1800	LR: 0.004000
Training Epoch: 28 [6656/10020]	Loss: 0.1621	LR: 0.004000
Training Epoch: 28 [6912/10020]	Loss: 0.2087	LR: 0.004000
Training Epoch: 28 [7168/10020]	Loss: 0.1501	LR: 0.004000
Training Epoch: 28 [7424/10020]	Loss: 0.1752	LR: 0.004000
Training Epoch: 28 [7680/10020]	Loss: 0.1189	LR: 0.004000
Training Epoch: 28 [7936/10020]	Loss: 0.1718	LR: 0.004000
Training Epoch: 28 [8192/10020]	Loss: 0.2099	LR: 0.004000
Training Epoch: 28 [8448/10020]	Loss: 0.2276	LR: 0.004000
Training Epoch: 28 [8704/10020]	Loss: 0.1830	LR: 0.004000
Training Epoch: 28 [8960/10020]	Loss: 0.1830	LR: 0.004000
Training Epoch: 28 [9216/10020]	Loss: 0.1797	LR: 0.004000
Training Epoch: 28 [9472/10020]	Loss: 0.1774	LR: 0.004000
Training Epoch: 28 [9728/10020]	Loss: 0.1513	LR: 0.004000
Training Epoch: 28 [9984/10020]	Loss: 0.1819	LR: 0.004000
Training Epoch: 28 [10020/10020]	Loss: 0.0592	LR: 0.004000
Epoch 28 - Average Train Loss: 0.1774, Train Accuracy: 0.9281
Epoch 28 training time consumed: 145.76s
Evaluating Network.....
Test set: Epoch: 28, Average loss: 0.0008, Accuracy: 0.9196, Time consumed:8.10s
Training Epoch: 29 [256/10020]	Loss: 0.1425	LR: 0.004000
Training Epoch: 29 [512/10020]	Loss: 0.1651	LR: 0.004000
Training Epoch: 29 [768/10020]	Loss: 0.1251	LR: 0.004000
Training Epoch: 29 [1024/10020]	Loss: 0.1347	LR: 0.004000
Training Epoch: 29 [1280/10020]	Loss: 0.2073	LR: 0.004000
Training Epoch: 29 [1536/10020]	Loss: 0.1249	LR: 0.004000
Training Epoch: 29 [1792/10020]	Loss: 0.1755	LR: 0.004000
Training Epoch: 29 [2048/10020]	Loss: 0.1658	LR: 0.004000
Training Epoch: 29 [2304/10020]	Loss: 0.1538	LR: 0.004000
Training Epoch: 29 [2560/10020]	Loss: 0.1983	LR: 0.004000
Training Epoch: 29 [2816/10020]	Loss: 0.1564	LR: 0.004000
Training Epoch: 29 [3072/10020]	Loss: 0.1595	LR: 0.004000
Training Epoch: 29 [3328/10020]	Loss: 0.2695	LR: 0.004000
Training Epoch: 29 [3584/10020]	Loss: 0.1989	LR: 0.004000
Training Epoch: 29 [3840/10020]	Loss: 0.1939	LR: 0.004000
Training Epoch: 29 [4096/10020]	Loss: 0.1671	LR: 0.004000
Training Epoch: 29 [4352/10020]	Loss: 0.1583	LR: 0.004000
Training Epoch: 29 [4608/10020]	Loss: 0.1861	LR: 0.004000
Training Epoch: 29 [4864/10020]	Loss: 0.1790	LR: 0.004000
Training Epoch: 29 [5120/10020]	Loss: 0.1992	LR: 0.004000
Training Epoch: 29 [5376/10020]	Loss: 0.1817	LR: 0.004000
Training Epoch: 29 [5632/10020]	Loss: 0.2296	LR: 0.004000
Training Epoch: 29 [5888/10020]	Loss: 0.2084	LR: 0.004000
Training Epoch: 29 [6144/10020]	Loss: 0.1493	LR: 0.004000
Training Epoch: 29 [6400/10020]	Loss: 0.1645	LR: 0.004000
Training Epoch: 29 [6656/10020]	Loss: 0.2644	LR: 0.004000
Training Epoch: 29 [6912/10020]	Loss: 0.1655	LR: 0.004000
Training Epoch: 29 [7168/10020]	Loss: 0.1412	LR: 0.004000
Training Epoch: 29 [7424/10020]	Loss: 0.1389	LR: 0.004000
Training Epoch: 29 [7680/10020]	Loss: 0.1636	LR: 0.004000
Training Epoch: 29 [7936/10020]	Loss: 0.1470	LR: 0.004000
Training Epoch: 29 [8192/10020]	Loss: 0.1401	LR: 0.004000
Training Epoch: 29 [8448/10020]	Loss: 0.2052	LR: 0.004000
Training Epoch: 29 [8704/10020]	Loss: 0.2161	LR: 0.004000
Training Epoch: 29 [8960/10020]	Loss: 0.1620	LR: 0.004000
Training Epoch: 29 [9216/10020]	Loss: 0.2107	LR: 0.004000
Training Epoch: 29 [9472/10020]	Loss: 0.1747	LR: 0.004000
Training Epoch: 29 [9728/10020]	Loss: 0.1889	LR: 0.004000
Training Epoch: 29 [9984/10020]	Loss: 0.1787	LR: 0.004000
Training Epoch: 29 [10020/10020]	Loss: 0.2103	LR: 0.004000
Epoch 29 - Average Train Loss: 0.1768, Train Accuracy: 0.9243
Epoch 29 training time consumed: 145.23s
Evaluating Network.....
Test set: Epoch: 29, Average loss: 0.0007, Accuracy: 0.9264, Time consumed:8.07s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_16h_27m_35s/ResNet18-MUCAC-seed2-ret50-29-best.pth
Training Epoch: 30 [256/10020]	Loss: 0.1798	LR: 0.004000
Training Epoch: 30 [512/10020]	Loss: 0.2166	LR: 0.004000
Training Epoch: 30 [768/10020]	Loss: 0.2057	LR: 0.004000
Training Epoch: 30 [1024/10020]	Loss: 0.1978	LR: 0.004000
Training Epoch: 30 [1280/10020]	Loss: 0.2106	LR: 0.004000
Training Epoch: 30 [1536/10020]	Loss: 0.1946	LR: 0.004000
Training Epoch: 30 [1792/10020]	Loss: 0.1822	LR: 0.004000
Training Epoch: 30 [2048/10020]	Loss: 0.1719	LR: 0.004000
Training Epoch: 30 [2304/10020]	Loss: 0.1590	LR: 0.004000
Training Epoch: 30 [2560/10020]	Loss: 0.1948	LR: 0.004000
Training Epoch: 30 [2816/10020]	Loss: 0.1635	LR: 0.004000
Training Epoch: 30 [3072/10020]	Loss: 0.1519	LR: 0.004000
Training Epoch: 30 [3328/10020]	Loss: 0.1855	LR: 0.004000
Training Epoch: 30 [3584/10020]	Loss: 0.1663	LR: 0.004000
Training Epoch: 30 [3840/10020]	Loss: 0.2018	LR: 0.004000
Training Epoch: 30 [4096/10020]	Loss: 0.1228	LR: 0.004000
Training Epoch: 30 [4352/10020]	Loss: 0.1577	LR: 0.004000
Training Epoch: 30 [4608/10020]	Loss: 0.1773	LR: 0.004000
Training Epoch: 30 [4864/10020]	Loss: 0.1414	LR: 0.004000
Training Epoch: 30 [5120/10020]	Loss: 0.2238	LR: 0.004000
Training Epoch: 30 [5376/10020]	Loss: 0.1971	LR: 0.004000
Training Epoch: 30 [5632/10020]	Loss: 0.1198	LR: 0.004000
Training Epoch: 30 [5888/10020]	Loss: 0.1937	LR: 0.004000
Training Epoch: 30 [6144/10020]	Loss: 0.1775	LR: 0.004000
Training Epoch: 30 [6400/10020]	Loss: 0.1656	LR: 0.004000
Training Epoch: 30 [6656/10020]	Loss: 0.1677	LR: 0.004000
Training Epoch: 30 [6912/10020]	Loss: 0.1748	LR: 0.004000
Training Epoch: 30 [7168/10020]	Loss: 0.1588	LR: 0.004000
Training Epoch: 30 [7424/10020]	Loss: 0.1114	LR: 0.004000
Training Epoch: 30 [7680/10020]	Loss: 0.1709	LR: 0.004000
Training Epoch: 30 [7936/10020]	Loss: 0.1659	LR: 0.004000
Training Epoch: 30 [8192/10020]	Loss: 0.1925	LR: 0.004000
Training Epoch: 30 [8448/10020]	Loss: 0.1623	LR: 0.004000
Training Epoch: 30 [8704/10020]	Loss: 0.2097	LR: 0.004000
Training Epoch: 30 [8960/10020]	Loss: 0.2429	LR: 0.004000
Training Epoch: 30 [9216/10020]	Loss: 0.1640	LR: 0.004000
Training Epoch: 30 [9472/10020]	Loss: 0.1788	LR: 0.004000
Training Epoch: 30 [9728/10020]	Loss: 0.1760	LR: 0.004000
Training Epoch: 30 [9984/10020]	Loss: 0.2013	LR: 0.004000
Training Epoch: 30 [10020/10020]	Loss: 0.1755	LR: 0.004000
Epoch 30 - Average Train Loss: 0.1778, Train Accuracy: 0.9259
Epoch 30 training time consumed: 145.40s
Evaluating Network.....
Test set: Epoch: 30, Average loss: 0.0008, Accuracy: 0.9240, Time consumed:8.10s
Training Epoch: 31 [256/10020]	Loss: 0.1722	LR: 0.004000
Training Epoch: 31 [512/10020]	Loss: 0.1637	LR: 0.004000
Training Epoch: 31 [768/10020]	Loss: 0.1783	LR: 0.004000
Training Epoch: 31 [1024/10020]	Loss: 0.1672	LR: 0.004000
Training Epoch: 31 [1280/10020]	Loss: 0.2390	LR: 0.004000
Training Epoch: 31 [1536/10020]	Loss: 0.1244	LR: 0.004000
Training Epoch: 31 [1792/10020]	Loss: 0.2387	LR: 0.004000
Training Epoch: 31 [2048/10020]	Loss: 0.1874	LR: 0.004000
Training Epoch: 31 [2304/10020]	Loss: 0.1681	LR: 0.004000
Training Epoch: 31 [2560/10020]	Loss: 0.1996	LR: 0.004000
Training Epoch: 31 [2816/10020]	Loss: 0.1611	LR: 0.004000
Training Epoch: 31 [3072/10020]	Loss: 0.1571	LR: 0.004000
Training Epoch: 31 [3328/10020]	Loss: 0.1612	LR: 0.004000
Training Epoch: 31 [3584/10020]	Loss: 0.2203	LR: 0.004000
Training Epoch: 31 [3840/10020]	Loss: 0.1395	LR: 0.004000
Training Epoch: 31 [4096/10020]	Loss: 0.1618	LR: 0.004000
Training Epoch: 31 [4352/10020]	Loss: 0.1879	LR: 0.004000
Training Epoch: 31 [4608/10020]	Loss: 0.1842	LR: 0.004000
Training Epoch: 31 [4864/10020]	Loss: 0.1441	LR: 0.004000
Training Epoch: 31 [5120/10020]	Loss: 0.2150	LR: 0.004000
Training Epoch: 31 [5376/10020]	Loss: 0.2302	LR: 0.004000
Training Epoch: 31 [5632/10020]	Loss: 0.1861	LR: 0.004000
Training Epoch: 31 [5888/10020]	Loss: 0.1742	LR: 0.004000
Training Epoch: 31 [6144/10020]	Loss: 0.1804	LR: 0.004000
Training Epoch: 31 [6400/10020]	Loss: 0.1621	LR: 0.004000
Training Epoch: 31 [6656/10020]	Loss: 0.1344	LR: 0.004000
Training Epoch: 31 [6912/10020]	Loss: 0.1515	LR: 0.004000
Training Epoch: 31 [7168/10020]	Loss: 0.2482	LR: 0.004000
Training Epoch: 31 [7424/10020]	Loss: 0.1977	LR: 0.004000
Training Epoch: 31 [7680/10020]	Loss: 0.1793	LR: 0.004000
Training Epoch: 31 [7936/10020]	Loss: 0.1827	LR: 0.004000
Training Epoch: 31 [8192/10020]	Loss: 0.1271	LR: 0.004000
Training Epoch: 31 [8448/10020]	Loss: 0.1726	LR: 0.004000
Training Epoch: 31 [8704/10020]	Loss: 0.1413	LR: 0.004000
Training Epoch: 31 [8960/10020]	Loss: 0.1745	LR: 0.004000
Training Epoch: 31 [9216/10020]	Loss: 0.1641	LR: 0.004000
Training Epoch: 31 [9472/10020]	Loss: 0.1585	LR: 0.004000
Training Epoch: 31 [9728/10020]	Loss: 0.1835	LR: 0.004000
Training Epoch: 31 [9984/10020]	Loss: 0.2498	LR: 0.004000
Training Epoch: 31 [10020/10020]	Loss: 0.0489	LR: 0.004000
Epoch 31 - Average Train Loss: 0.1782, Train Accuracy: 0.9274
Epoch 31 training time consumed: 144.66s
Evaluating Network.....
Test set: Epoch: 31, Average loss: 0.0008, Accuracy: 0.9264, Time consumed:7.91s
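Note the scale gap between the train loss (~0.18) and the test "Average loss" (~0.0008): the ratio is roughly the batch size of 256. This is a common quirk in training templates that sum batch-mean losses and then divide by the number of samples rather than the number of batches; the sketch below illustrates the arithmetic (this is an inference from the numbers, not confirmed from the script's source):

```python
def dataset_average_of_batch_means(batch_mean_losses, dataset_size):
    # Summing batch-MEAN losses but dividing by the number of SAMPLES
    # shrinks the result by a factor of the batch size.
    return sum(batch_mean_losses) / dataset_size

losses = [0.2] * 10                                   # 10 batches, mean loss 0.2
print(dataset_average_of_batch_means(losses, 10 * 256))  # ~0.00078, the log's scale
```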
Valid (Test) Dl:  2065
Train Dl:  10548
Retain Train Dl:  10020
Forget Train Dl:  528
Retain Valid Dl:  10020
Forget Valid Dl:  528
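The loader sizes are consistent with a class-balanced forget split: 10548 training samples, 264 forgotten per class (528 total), 10020 retained. A hypothetical sketch of such a split, assuming a seeded per-class sample as the header's "seed 2" suggests (`split_retain_forget` is illustrative, not the project's actual helper):

```python
import random

def split_retain_forget(labels, per_class, seed=2):
    """Draw a class-balanced forget set; everything else is retained."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    forget = []
    for idxs in by_class.values():
        forget += rng.sample(idxs, per_class)   # balanced draw per class
    forget_set = set(forget)
    retain = [i for i in range(len(labels)) if i not in forget_set]
    return retain, forget

# Class counts matching the log: 5547+264 = 5811 zeros, 4473+264 = 4737 ones.
labels = [0] * 5811 + [1] * 4737
retain, forget = split_retain_forget(labels, per_class=264)
print(len(retain), len(forget))  # 10020 528
```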
retain_prob Distribution: 2065 samples
test_prob Distribution: 2065 samples
forget_prob Distribution: 528 samples
Set1 Distribution: 528 samples
Set2 Distribution: 528 samples
Set1 Distribution: 528 samples
Set2 Distribution: 528 samples
Set1 Distribution: 2065 samples
Set2 Distribution: 2065 samples
Set1 Distribution: 2065 samples
Set2 Distribution: 2065 samples
Test Accuracy: 92.79258728027344
Retain Accuracy: 92.04753112792969
Zero-Retain Forget (ZRF): 0.8412534594535828
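The ZRF score is commonly defined as one minus the mean Jensen-Shannon divergence between the evaluated model's predictive distribution and that of a randomly initialized model over the forget set; a value near 1 means the model behaves like a random model on forgotten data. A hedged, pure-Python sketch of that definition (natural-log JS divergence assumed; the actual metric in this run may differ in base or details):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q), skipping zero-probability terms."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def zrf(model_probs, random_probs):
    """1 - mean JS divergence over paired forget-set predictions."""
    n = len(model_probs)
    return 1.0 - sum(js(p, q) for p, q in zip(model_probs, random_probs)) / n

# Identical distributions give JS = 0, hence ZRF = 1 (perfect forgetting proxy).
print(zrf([[0.5, 0.5]], [[0.5, 0.5]]))  # 1.0
```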
Membership Inference Attack (MIA): 0.3939393939393939
Forget vs Retain Membership Inference Attack (MIA): 0.46226415094339623
Forget vs Test Membership Inference Attack (MIA): 0.5566037735849056
Test vs Retain Membership Inference Attack (MIA): 0.5242130750605327
Train vs Test Membership Inference Attack (MIA): 0.5254237288135594
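The MIA numbers above hover near 0.5, which is the desirable outcome: an attacker cannot reliably tell the compared sets apart. One plausible (hypothetical) way such scores are produced is a loss-threshold attack, sketched below; real evaluations often use a trained classifier on per-sample losses instead, so treat this as an illustration of the metric, not the script's implementation:

```python
def threshold_mia(member_losses, nonmember_losses):
    """Best single-threshold attack accuracy; members are assumed lower-loss."""
    scores = [(l, 1) for l in member_losses] + [(l, 0) for l in nonmember_losses]
    n = len(scores)
    best = 0.0
    for t, _ in scores:  # try every observed loss as the threshold
        correct = sum((l <= t) == (m == 1) for l, m in scores)
        best = max(best, correct / n)
    return best

# Well-separated losses are trivially attackable (accuracy 1.0) ...
print(threshold_mia([0.05, 0.10, 0.12], [0.40, 0.55, 0.70]))  # 1.0
# ... while interleaved losses push the attack toward chance, as in the log.
print(threshold_mia([0.1, 0.5, 0.3, 0.7], [0.2, 0.6, 0.4, 0.8]))
```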
Forget Set Accuracy (Df): 93.359375
Method Execution Time: 5886.84 seconds
